73. During model training, a larger value of the epoch parameter indicates a better training result. False
74. Mini-batch gradient descent (MBGD) cannot balance the robustness of stochastic gradient descent and the efficiency of batch gradient descent, and we may risk getting stuck at local minima. It is not commonly used in practice. False (see the MBGD sketch after this list)
75. MindSpore accelerates model convergence through automatic parallelism and second-order optimization. True
76. AIR eliminates model differences between different backends through unified operator IR definitions. You can coordinate tasks on all platforms (device, edge, and cloud) based on the same model file. True
77. Assuming the number of hyperparameters is the same, stochastic gradient descent (SGD) combined with manual adjustment achieves a better effect than adaptive learning rates. False
78. An AI processor is also referred to as an AI accelerator. It is a functional module used to process a large quantity of computing tasks in an AI application. Ascend 910 is a typical product. True
79. Federated learning can protect user privacy to some extent. True
80. In the convolutional layer, the dropout ratio is the ratio of features whose values are set to 0.
81. Generally, the dropout ratio ranges from 0.2 to 0.5. True (see the dropout sketch after this list)
82. As the compute core of an Ascend AI Processor, the AI Core is responsible for complex matrix computation. True
83. A decision tree selects a label from features of the training data to work as the node splitting standard. Different label selection standards generate different decision tree algorithms. False
84. In a convolutional neural network, the size of each kernel is not necessarily the same as that of a pooling layer window, but their steps should be the same.
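
A minimal NumPy sketch of mini-batch gradient descent (question 74), fitting a linear model with mean-squared error. The data, learning rate, batch size, and epoch count are illustrative assumptions, not values from the questions. It shows how each update uses a small random batch, balancing the noise of pure SGD (batch size 1) against the cost of full-batch gradient descent.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))                 # illustrative feature matrix
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=1000)

w = np.zeros(3)                                # model weights
lr, batch_size, epochs = 0.1, 32, 20           # assumed hyperparameters

for epoch in range(epochs):
    idx = rng.permutation(len(X))              # shuffle once per epoch
    for start in range(0, len(X), batch_size):
        batch = idx[start:start + batch_size]
        Xb, yb = X[batch], y[batch]
        grad = 2 * Xb.T @ (Xb @ w - yb) / len(Xb)   # MSE gradient on the mini-batch
        w -= lr * grad                              # one mini-batch update

print(w)                                       # close to true_w after training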
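
A minimal NumPy sketch of inverted dropout (questions 80 and 81), using an assumed ratio of 0.5 from the 0.2-0.5 range mentioned above; the activation values are made up for illustration. Roughly the stated fraction of features is set to 0 during training, and the surviving values are rescaled so the expected activation stays the same.

import numpy as np

def dropout(activations, ratio=0.5, training=True, rng=None):
    """Set roughly `ratio` of the features to 0 during training (inverted dropout)."""
    if not training or ratio == 0.0:
        return activations                     # no-op at inference time
    rng = rng or np.random.default_rng()
    keep_prob = 1.0 - ratio
    mask = rng.random(activations.shape) < keep_prob
    return activations * mask / keep_prob      # rescale survivors to keep the expectation

feature_map = np.ones((4, 4))                  # illustrative activations
print(dropout(feature_map, ratio=0.5))         # about half the entries become 0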