| Full Exam Name | Data Science Essentials |
| Certification Name | Cloudera Certified Professional |
Cloudera DS-200 Exam Questions:
Each node in your Hadoop cluster, running YARN, has 64 GB of memory and 24 cores. Your yarn-site.xml has the following configuration:
You want YARN to launch no more than 16 containers per node. What should you do?
A. Modify yarn-site.xml with the following property:
B. Modify yarn-site.xml with the following property:
C. Modify yarn-site.xml with the following property:
D. No action is needed: YARN’s dynamic resource allocation automatically optimizes the node memory and cores
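The property bodies for options A–C are missing from this dump, but the arithmetic behind the 16-container limit can be sketched. Assuming containers are bounded by memory and that yarn.scheduler.minimum-allocation-mb is set to 4096 (a value chosen here for illustration, not quoted from the dump), a 64 GB node caps out at 16 containers:

```python
# Hedged sketch: how many containers fit on one node if each container
# receives at least yarn.scheduler.minimum-allocation-mb of memory.
node_memory_mb = 64 * 1024      # 64 GB per node (from the question)
min_allocation_mb = 4096        # assumed yarn.scheduler.minimum-allocation-mb

max_containers = node_memory_mb // min_allocation_mb
print(max_containers)  # 16
```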
You want a node to swap Hadoop daemon data from RAM to disk only when absolutely necessary. What should you do?
A. Delete the /dev/vmswap file on the node
B. Delete the /etc/swap file on the node
C. Set the ram.swap parameter to 0 in core-site.xml
D. Set the vm.swappiness parameter to 0 on the node
E. Delete the /swapfile file on the node
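For background: on Linux, the kernel's eagerness to swap application memory is controlled by the vm.swappiness sysctl. A hedged sketch of making daemon memory swap only when absolutely necessary (assuming a Linux node; applied with `sysctl -p` or at boot):

```
# /etc/sysctl.conf fragment — assumption: Linux node
# 0 tells the kernel to avoid swapping application (daemon) memory
# unless it is absolutely necessary.
vm.swappiness=0
```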
You are configuring your cluster to run HDFS and MapReduce v2 (MRv2) on YARN. Which two daemons need to be installed on your cluster’s master nodes?
You observed that the number of spilled records from Map tasks far exceeds the number of map output records. Your child heap size is 1 GB and your io.sort.mb value is set to 1000 MB. How would you tune your io.sort.mb value to achieve the maximum memory-to-disk I/O ratio?
A. For a 1GB child heap size an io.sort.mb of 128 MB will always maximize memory to disk I/O
B. Increase the io.sort.mb to 1GB
C. Decrease the io.sort.mb value to 0
D. Tune the io.sort.mb value until you observe that the number of spilled records equals (or is as close as possible to) the number of map output records.
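To see why spills explode in this setup, compare the sort buffer to the child heap (a sketch; 0.80 is the classic io.sort.spill.percent default, assumed here):

```python
# Hedged sketch: an io.sort.mb nearly equal to the child heap starves the task.
child_heap_mb = 1024          # 1 GB child JVM heap (from the question)
io_sort_mb = 1000             # sort buffer carved out of that heap
spill_percent = 0.80          # assumed io.sort.spill.percent default

heap_left_mb = child_heap_mb - io_sort_mb
spill_threshold_mb = io_sort_mb * spill_percent
print(heap_left_mb)           # 24 -> only 24 MB left for the task's own objects
print(spill_threshold_mb)     # 800.0 -> buffer starts spilling at ~800 MB
```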
You are running a Hadoop cluster with a NameNode on host mynamenode, a secondary NameNode on host mysecondarynamenode and several DataNodes.
Which best describes how you determine when the last checkpoint happened?
A. Execute hdfs namenode -report on the command line and look at the Last Checkpoint information
B. Execute hdfs dfsadmin -saveNamespace on the command line which returns to you the last checkpoint value in fstime file
C. Connect to the web UI of the Secondary NameNode (http://mysecondarynamenode:50090/) and look at the “Last Checkpoint” information
D. Connect to the web UI of the NameNode (http://mynamenode:50070) and look at the “Last Checkpoint” information
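For reference, port 50090 in option C is the Hadoop 2.x default for the Secondary NameNode web UI, set by this hdfs-site.xml property (default value shown):

```xml
<!-- hdfs-site.xml: Secondary NameNode HTTP endpoint (Hadoop 2.x default) -->
<property>
  <name>dfs.namenode.secondary.http-address</name>
  <value>0.0.0.0:50090</value>
</property>
```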