Huawei Assignment 1

The document consists of a series of questions related to artificial intelligence, machine learning, and deep learning concepts, including topics such as driving automation levels, image feature extraction technologies, data quality in AI, and various machine learning algorithms. It also covers neural network training, activation functions, optimizers, and frameworks, along with statements about specific technologies and their functionalities. The questions are designed to assess knowledge in AI and machine learning principles and practices.


1. What are the levels of driving automation defined by SAE International, formerly known as the Society of
Automotive Engineers (SAE), based on the degree of dependency on the system?

A. L0~L4

B. L0~L5

C. L1~L5

D. L1~L4

2. Which of the following technologies is commonly used for image feature extraction and related research?

A. Convolutional neural network

B. Naive Bayes classification algorithm

C. Long short-term memory (LSTM) network

D. Word2Vec

3. Data is the carrier and representation of information. Which of the following statements about data in AI
applications is true?

A. Data quality is not relevant to the quality of an AI model.

B. Data quality is important, as it determines the result of the model.

C. Data can be directly imported to a model without preprocessing.

D. Data refers only to that within an Excel file.

4. Consider a scenario where a machine learning algorithm is used to filter spam. According to the definition
of machine learning, which of the following describes the experience E?

A. Spam filtering

B. Accuracy of spam filtering

C. All tagged spam and genuine emails in the past three years

D. Email addresses

5. A computer uses labeled images to learn and determine which images contain apples and which contain
pears. Which of the following types of machine learning is most applicable to this scenario?

A. Supervised learning

B. Unsupervised learning

C. Semi-supervised learning

D. Reinforcement learning

6. Which of the following statements is true about classification models and regression models in machine
learning?

A. For regression problems, the output variables are discrete values. For classification problems, the output
variables are continuous values.

B. The most commonly used indicators for evaluating regression and classification problems are accuracy
and recall rate.

C. There may be overfitting in both regression and classification problems.

D. Logistic regression is a typical regression model.


7. Which of the following points constitute the support vectors of the SVM algorithm, without considering
regularization terms?

A. Points on the separating hyperplane

B. Points farthest from the separating hyperplane

C. Points closest to the separating hyperplane

D. Points of a certain type

8. Which of the following statements is false about support vector machines (SVMs)?

A. SVMs are classification models. Their basic model is the linear classifier that maximizes the width of the
gap between the two categories in the feature space.

B. SVMs also have a kernel trick, which makes them non-linear classifiers.

C. In the case of linear inseparability, non-linear mapping algorithms are used to convert the linearly
inseparable samples of low-dimensional input space into samples of high-dimensional feature space. In this
way, samples become linearly separable.

D. SVMs only apply to linear classification.

9. Kernel functions allow algorithms to fit the largest hyperplane in a transformed high-dimensional feature
space. Which of the following is not a common kernel function?

A. Linear kernel function

B. Polynomial kernel function

C. Gaussian kernel function

D. Poisson kernel function

10. In 1958, Frank Rosenblatt invented the perceptron algorithm. About a decade later, Marvin Minsky questioned
the perceptron's ability to solve non-linear classification problems and posed the problem that signed the death
warrant of perceptrons. Which of the following is that problem?

A. AND problem

B. OR problem

C. XOR problem

D. XAND problem

11. During neural network training, which of the following values is continuously updated by using the gradient
descent method to minimize the loss function?

A. Hyperparameter

B. Feature

C. Number of samples

D. Parameter

12. The ReLU function is commonly used in deep learning neural networks. Which of the following is the value
range of this function?

A. [0,+∞)

B. [0,1]

C. [-1,1]

D. [-1,0]
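
For reference on question 12, here is a minimal Python sketch (not part of the original assignment) illustrating why ReLU outputs fall in [0, +∞):

```python
# Minimal sketch: ReLU clips negative inputs to 0 and passes positives through,
# so its outputs are never negative and have no upper bound.
import numpy as np

def relu(x):
    return np.maximum(0, x)

x = np.array([-3.0, -0.5, 0.0, 2.0, 10.0])
print(relu(x))  # [ 0.   0.   0.   2.  10.] -- range is [0, +inf)
```
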
13. The sigmoid activation function is monotonic and continuous, has bounded outputs, and makes the
network easy to converge. It was popular for a period of time. However, when the network is deep, what
problems may sigmoid cause?

A. Gradient reduction

B. Vanishing gradient

C. XOR

D. Overfitting
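
As background for question 13, here is a minimal Python sketch (an illustration assuming a plain feed-forward chain of sigmoid layers, which the question does not specify) showing why deep sigmoid networks suffer from vanishing gradients:

```python
# The derivative of sigmoid never exceeds 0.25, so backpropagated gradients
# shrink multiplicatively as they pass through many sigmoid layers.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_grad(x):
    s = sigmoid(x)
    return s * (1.0 - s)

print(sigmoid_grad(0.0))  # 0.25, the largest value the derivative can take
print(sigmoid_grad(5.0))  # ~0.0066, deep in the saturation region
print(0.25 ** 20)         # ~9.1e-13: best-case gradient factor after 20 layers
```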

14. Overfitting problems can be avoided through dataset expansion. Which of the following statements is true
about dataset expansion?

A. The larger the dataset, the lower the probability of overfitting.

B. The larger the dataset, the higher the probability of overfitting.

C. The smaller the dataset, the lower the probability of overfitting.

D. The probability of overfitting

15. Which of the following functions can be used to alleviate the vanishing gradient problem?

A. Sigmoid
B. Tanh
C. Softsign
D. ReLU

16. Which of the following is the shape of tensor [[[0,1],[2,3]],[[4,5],[6,7]]]?

A. [3,3,2]
B. [3,2,4]
C. [2,3,4]
D. [2,2,2]
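
A quick way to check the answer to question 16 (a sketch, not part of the assignment) is to build the nested list with NumPy and read off its shape:

```python
# Two blocks, each containing two rows of two elements -> shape (2, 2, 2).
import numpy as np

t = np.array([[[0, 1], [2, 3]], [[4, 5], [6, 7]]])
print(t.shape)  # (2, 2, 2)
```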

17. Which of the following statements about the running process of the MindArmour subsystem is false?

A. Configuration policies: Define test policies based on threat vectors and trustworthiness certification
requirements and select appropriate test data generation methods.
B. Fuzzing execution: Generate trusted test data randomly based on the model coverage and
configuration policies.
C. Evaluation report: Generate an evaluation report based on built-in or user-defined trustworthiness
metrics.
D. Trustworthiness enhancement: Use preset methods to enhance the trustworthiness of AI models.

18. Which of the following is NOT a complexity feature of AI computing?

A. Mixed precision computing


B. Parallel data and computing
C. Parallel communication and computing
D. Parallel processing of structured and unstructured data

19. On-device execution refers to the execution of the entire graph. It makes full use of the computing power
of the Ascend AI Processor, greatly reducing the interaction overhead and improving the accelerator usage.
Which of the following is false about on-device execution?

A. Challenges to model execution with powerful chip computing power: Memory wall problems, high
interaction overhead, and difficult data supply. Some operations are performed on the host, while
others are performed on the device. The interaction overhead is much greater than the execution
overhead. As a result, the accelerator usage is low.
B. The chip-oriented deep graph optimization is used to reduce synchronization waiting time and
maximize the parallelism degree of "data-computing-communication". The training performance is
equivalent to that of the graph scheduling mode on the host.
C. Challenges to distributed gradient aggregation with powerful chip computing power: When a single
iteration of ResNet-50 takes 20 ms, frequent synchronization incurs central control overhead and
communication overhead. The traditional method requires three rounds of synchronization to complete
AllReduce, whereas the data-driven method performs AllReduce automatically without control overhead.
D. MindSpore uses gradient-driven adaptive graph optimization to implement decentralized and
autonomous AllReduce. The gradient aggregation step is consistent, and computing and
communication are fully streamlined.

20. Which of the following statements about the Da Vinci architecture is incorrect?

A. A compute unit contains four types of basic compute resources.


B. Control units are responsible for the running of AI Cores.
C. The storage system consists of on-chip storage units of AI Core and corresponding data paths.
D. Data is transferred to the L1 buffer through the bus interface unit.

21. Which of the following statements about the L1 buffer is true?

A. Data in the L1 buffer needs to be read outside the AI Core over the bus interface each time.
B. The L1 buffer can permanently retain data that needs to be reused.
C. The L1 buffer decreases data accesses over the bus and avoids bus congestion.
D. The L1 buffer is used to store the initial values in the neural network.

22. In Huawei Cloud EI, which of the following is a one-stop AI development platform that supports large-
volume data preprocessing, semi-automated data labeling, distributed training, automated model building,
and on-demand model deployment across the device, edge, and cloud for machine learning and deep learning,
and helps AI developers quickly build and deploy models and efficiently manage the AI development lifecycle?

A. ModelArts
B. MindSpore
C. MySQL
D. Ascend

23. Which of the following statements about some of the subfields of AI are true?

A. Computer vision is a science that studies how to make computers "see" things.
B. Speech processing is a general term for different speech processing technologies, such as the study of
vocalization, the collection of statistics related to speech signals, speech recognition, machine
synthesis, and speech perception.
C. Natural language processing (NLP) studies how to use computer technology to understand and use
human languages.
D. Autonomous driving does not require speech processing or computer vision.

24. Which of the following are topics of speech processing research?

A. Speech recognition
B. Voice processing
C. Speech wake-up
D. Voiceprint recognition

25. Which of the following statements about datasets are true?

A. A dataset is a collection of data used in machine learning tasks. Each piece of data is called a sample.
B. Events or attributes that reflect the performance or nature of a sample in a particular aspect are
called features.
C. A training set is a dataset used in the training process, where each sample is referred to as a training
sample.
D. Learning (or training) is the process of building a model from data.
26. Which of the following statements about data preprocessing are true?

A. Data cleansing is a process of filling in missing values, as well as detecting and removing noisy data
and outliers.
B. The purpose of data dimension reduction is to simplify data attributes and avoid the curse of
dimensionality.
C. The purpose of data standardization is to reduce noise data and improve model accuracy by
standardizing data.
D. Machine learning outputs results through models. Therefore, model training is more important than
data preprocessing.

27. Which of the following statements are true about model parameters and hyperparameters?

A. Models contain both parameters and hyperparameters.


B. Hyperparameters are automatically learned by models.
C. Hyperparameters are manually set.
D. Hyperparameters can be used to control training.

28. Which of the following statements are true about the advantages of the Rectified Linear Unit (ReLU)
activation function?

A. The output is bounded, so training does not diverge easily.


B. The calculation is simple.
C. The vanishing gradient problem can be effectively alleviated.
D. Dead neurons exist.

29. Which of the following are common activation functions of neural networks?

A. Dropout
B. Sigmoid
C. Tanh
D. Leaky ReLU

30. Which of the following statements are true about the dropout regularization?

A. It is more effective than penalty-based regularization in deep learning.


B. It is simple to calculate and easy to implement.
C. Its effect is poor with insufficient training data.
D. Generally, it is used only in training.

31. Which of the following are activation functions of deep learning algorithms?

A. Sigmoid
B. ReLU
C. Tanh
D. Sin

32. Which of the following statements are true about commonly used optimizers?

A. The momentum optimizer updates parameters with the same learning rate, while the momentum
coefficient is adjusted with each iteration.
B. The idea behind the Adagrad optimizer is to set different learning rates for different parameters.
C. One drawback of the Adagrad optimizer is that it ends the optimization process too early.
D. The RMSProp optimizer introduces an attenuation coefficient to enable the gradients to attenuate by a
certain proportion in each round.
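
To make the optimizer statements in question 32 concrete, here is a simplified single-parameter Python sketch of the Adagrad and RMSProp update rules (an illustration; the variable names lr, rho, and eps are my own, not from the assignment):

```python
import numpy as np

def adagrad_step(w, g, acc, lr=0.01, eps=1e-8):
    acc = acc + g * g                       # accumulates ALL past squared gradients
    w = w - lr * g / (np.sqrt(acc) + eps)   # per-parameter effective learning rate
    return w, acc                           # acc only grows, so steps keep shrinking

def rmsprop_step(w, g, avg, lr=0.01, rho=0.9, eps=1e-8):
    avg = rho * avg + (1 - rho) * g * g     # attenuation coefficient rho decays old gradients
    w = w - lr * g / (np.sqrt(avg) + eps)
    return w, avg
```

The ever-growing Adagrad accumulator is what can end optimization too early, while RMSProp's decay factor keeps the denominator from growing without bound; Adam (question 33) combines this decaying average with a momentum term.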

33. Which of the following comprise the Adam optimizer?

A. Momentum
B. Adagrad
C. RMSProp
D. Nesterov
34. Which of the following frameworks natively support distributed deep learning?

A. TensorFlow
B. MindSpore
C. CNTK
D. MXNet

35. Which of the following are common evaluation metrics for object detection?

A. mAP
B. IOU value
C. ROC
D. BLEU

36. Which of the following network model formats can be used to save training parameters and network
models?

A. Checkpoint
B. MindIR
C. ONNX
D. AIR

37. Which of the following hardware supports MindSpore training?

A. CPU
B. GPU
C. NPU
D. TPU

38. Which of the following components are included in the Atlas 300T Pro training card?

A. Da Vinci AI Cores
B. TaiShan processor cores
C. DDR memory
D. RoCE high-speed port

39. In a neural network based on connectionism, each node can express a specific meaning.

A. True
B. False

40. As the cornerstone of Huawei's full-stack, all-scenario AI solution, Atlas provides modules, boards, and
servers powered by the Ascend AI processor to meet customer demand for computing power in all scenarios.

A. True
B. False

41. ModelArts is a one-stop development platform for AI developers. With large-volume data preprocessing,
semi-automated data labeling, distributed training, automated model building, and on-demand model
deployment across the device, edge, and cloud, ModelArts helps AI developers quickly build and deploy models
and efficiently manage the AI development lifecycle.

A. True
B. False

42. In terms of ensuring data privacy and security, federated learning trains models using different data
sources, thereby overcoming the data bottleneck.

A. True
B. False
43. Softmax regression is a generalization of logistic regression and applies only to binary classification.

A. True
B. False

44. Lasso regression is a type of linear regression in which an absolute-value (L1) penalty term is added to
the loss function.

A. True
B. False
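
For question 44, a minimal Python sketch (not from the assignment; lam is a hypothetical penalty coefficient) of the Lasso objective, i.e. squared error plus an L1 penalty on the weights:

```python
import numpy as np

def lasso_loss(X, y, w, lam):
    residual = y - X @ w
    return np.sum(residual ** 2) + lam * np.sum(np.abs(w))

X = np.array([[1.0, 0.0], [0.0, 1.0]])
y = np.array([1.0, 2.0])
w = np.array([0.5, -0.5])
print(lasso_loss(X, y, w, lam=0.1))  # 6.6 = 6.5 squared error + 0.1 L1 penalty
```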

45. Assuming a dataset contains the areas and prices of 21,613 housing units in a city, you can use a
classification model to predict the prices of other housing units in the city.

A. True
B. False

46. The tanh function can effectively solve the vanishing gradient problem.

A. True
B. False

47. Dropout can only be used in neural networks to avoid overfitting.

A. True
B. False

48. L1 regularization is referred to as weight decay.

A. True
B. False

49. When mindspore.nn.MaxPool2d is used to build a 2D maximum pooling layer, the received data format is
[Height, Width, Channel, Quantity] by default.

A. True
B. False
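
As a reference for question 49, a minimal MindSpore sketch (hedged: it relies on mindspore.nn.MaxPool2d's documented default NCHW layout and is not taken from the assignment) showing the expected input format [batch, channel, height, width]:

```python
import numpy as np
import mindspore
from mindspore import Tensor, nn

x = Tensor(np.random.rand(1, 3, 28, 28), mindspore.float32)  # N, C, H, W
pool = nn.MaxPool2d(kernel_size=2, stride=2)
print(pool(x).shape)  # expected: (1, 3, 14, 14)
```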

50. The vision and value of MindSpore are to lower the AI development threshold, unleash the computing
power of Ascend AI Processors, and empower inclusive AI.

A. True
B. False

51. The MindInsight subsystem of MindSpore discovers model lineage and compares training results through
the collected information about training hyperparameters, datasets, and data augmentation.

A. True
B. False

52. MindSpore can be quickly deployed on the cloud, edge, and mobile phones, improving resource utilization
and privacy protection and enabling developers to focus on developing AI apps.

A. True
B. False

53. Products of the Atlas series cover training and inference scenarios, including servers, edge computing
devices, and acceleration modules.

A. True
B. False

54. Huawei Cloud EI provides intelligent twins, AI development platforms, and general AI capabilities.

A. True
B. False
55. HMS Core supports HarmonyOS and Android, but does not support Windows.

A. True
B. False

56. Compared with CPUs and GPUs, NPUs use synapse weights to integrate storage and compute, improving
operating efficiency.

A. True
B. False

57. The primary purpose of () is to remove unimportant weights from the weight matrix and fine-tune the
network again.

Answer: network pruning

58. The k value in the k-nearest neighbors (k-NN) algorithm is manually set. It is a/an () of a model. (Enter
"parameter" or "hyperparameter".)

Answer: hyperparameter
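
A minimal sketch for question 58 (it assumes scikit-learn, which the assignment does not mention) showing that k is fixed by the practitioner before training, i.e. it is a hyperparameter rather than a learned parameter:

```python
from sklearn.neighbors import KNeighborsClassifier

X = [[0], [1], [2], [3]]
y = [0, 0, 1, 1]

model = KNeighborsClassifier(n_neighbors=3)  # k is set manually here
model.fit(X, y)
print(model.predict([[1.4]]))  # neighbours vote; k itself is never learned from data
```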

59. At a convolution layer of a convolutional neural network, assume there are 128 3 x 3 convolution kernels
and the size of the input feature map is 28 x 28 x 64. The depth of a convolution kernel is (). (Enter only digits.)

Answer: 64
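
The bookkeeping behind question 59, as a short Python sketch (an illustration, not from the assignment): a convolution kernel spans all input channels, so its depth equals the channel count of the input feature map.

```python
in_height, in_width, in_channels = 28, 28, 64  # input feature map: 28 x 28 x 64
num_kernels, k_size = 128, 3                   # 128 kernels of size 3 x 3

kernel_shape = (k_size, k_size, in_channels)   # each kernel is 3 x 3 x 64
print(kernel_shape[-1])                        # 64 -> the kernel depth
print(num_kernels)                             # 128 output channels in the result
```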

60. CNN is an abbreviation for ().

Answer: Convolutional Neural Network

61. Among MindSpore's Python APIs, mindspore.() defines loss functions, optimizers, and computing units
for constructing networks.

Answer: nn
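
A minimal sketch for question 61 (hedged: the class names below are drawn from MindSpore's public mindspore.nn namespace as I understand it, not from the assignment) showing the kinds of building blocks mindspore.nn provides:

```python
from mindspore import nn

net = nn.Dense(10, 2)                                    # a small fully connected layer
loss_fn = nn.SoftmaxCrossEntropyWithLogits(sparse=True)  # a loss function
optimizer = nn.Momentum(net.trainable_params(), learning_rate=0.01, momentum=0.9)
```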

62. In Huawei Cloud NLP service, the () technology can divide text into separate words as a sequence. For
example, in English text, spaces are natural delimiters between words. (Fill in the blank.)

Answer: word segmentation
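
For question 62, a toy Python sketch (not the Huawei Cloud NLP API): in English text, whitespace is a natural delimiter, so a naive word segmentation can be done with str.split().

```python
text = "natural language processing divides text into words"
print(text.split())  # ['natural', 'language', 'processing', 'divides', 'text', 'into', 'words']
```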
