A machine learning (ML) specialist needs to extract embedding vectors from a text series. The goal is to provide a ready-to-ingest feature space that a data scientist can use to develop downstream ML predictive models. The text consists of curated sentences in English. Many sentences use similar words but in different contexts. The sentences include questions and answers, and the embedding space must differentiate between them. Which options can produce the required embedding vectors that capture word context and sequential QA information? (Choose two.)

A. Amazon SageMaker seq2seq algorithm
B. Amazon SageMaker BlazingText algorithm in Skip-gram mode
C. Amazon SageMaker Object2Vec algorithm
D. Amazon SageMaker BlazingText algorithm in continuous bag-of-words (CBOW) mode
E. Combination of the Amazon SageMaker BlazingText algorithm in Batch Skip-gram mode with a custom recurrent neural network (RNN)

Suggested Answer: AC
Community Answer: AC

Seq2seq and Object2Vec both encode entire token sequences (for example, with RNN or BiLSTM encoders), so their embeddings reflect word order and can distinguish questions from answers. BlazingText's Skip-gram, CBOW, and Batch Skip-gram modes produce context-independent word vectors that discard sentence-level order, so options B and D do not capture sequential context, and option E would require building a custom model rather than delivering a ready-to-ingest feature space.

References:
https://aws.amazon.com/blogs/machine-learning/create-a-word-pronunciation-sequence-to-sequence-model-using-amazon-sagemaker/
https://docs.aws.amazon.com/sagemaker/latest/dg/object2vec.html
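For option C, the sketch below shows one way to configure the built-in Object2Vec algorithm with BiLSTM encoders through the SageMaker Python SDK, which is how it can learn order-aware embeddings for question-answer pairs. This is a minimal sketch under stated assumptions, not a definitive recipe: the role ARN, S3 paths, and hyperparameter values are placeholders.

    # Minimal Object2Vec training sketch (SageMaker Python SDK v2).
    # The role ARN, bucket paths, and hyperparameter values below are
    # placeholder assumptions, not values from the question.
    import sagemaker
    from sagemaker import image_uris
    from sagemaker.estimator import Estimator

    session = sagemaker.Session()
    region = session.boto_region_name
    role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder

    # Retrieve the built-in Object2Vec container image for this region.
    container = image_uris.retrieve("object2vec", region)

    estimator = Estimator(
        image_uri=container,
        role=role,
        instance_count=1,
        instance_type="ml.m5.xlarge",
        output_path="s3://my-bucket/object2vec-output/",  # placeholder bucket
        sagemaker_session=session,
    )

    # BiLSTM encoders consume whole token sequences, so the learned
    # embeddings reflect word order and context, unlike bag-of-words
    # or skip-gram word vectors.
    estimator.set_hyperparameters(
        enc0_network="bilstm",
        enc1_network="bilstm",
        enc0_max_seq_len=50,
        enc1_max_seq_len=50,
        enc0_vocab_size=30000,
        enc1_vocab_size=30000,
        enc_dim=256,          # dimensionality of the output embedding vectors
        epochs=10,
        mini_batch_size=64,
    )

    # Training data: JSON Lines records pairing tokenized question and
    # answer sequences with a relationship label, staged in S3
    # (the path is a placeholder).
    estimator.fit({"train": "s3://my-bucket/object2vec-train/"})

After training, the encoders can be deployed to emit fixed-length embedding vectors for new sentence pairs, giving the data scientist the ready-to-ingest feature space the question asks for.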