Case study -
This is a case study. Case studies are not timed separately. You can use as much exam time as you would like to complete each case. However, there may be additional case studies and sections on this exam. You must manage your time to ensure that you are able to complete all questions included on this exam in the time provided.
To answer the questions included in a case study, you will need to reference information that is provided in the case study. Case studies might contain exhibits and other resources that provide more information about the scenario that is described in the case study. Each question is independent of the other questions in this case study.
At the end of this case study, a review screen will appear. This screen allows you to review your answers and to make changes before you move to the next section of the exam. After you begin a new section, you cannot return to this section.
To start the case study -
To display the first question in this case study, click the Next button. Use the buttons in the left pane to explore the content of the case study before you answer the questions. Clicking these buttons displays information such as business requirements, existing environment, and problem statements. If the case study has an All Information tab, note that the information displayed is identical to the information displayed on the subsequent tabs. When you are ready to answer a question, click the Question button to return to the question.
Overview -
General Overview -
Contoso, Ltd. is an international accounting company that has offices in France, Portugal, and the United Kingdom.
Contoso has a professional services department that contains the roles shown in the following table.
Answer :
Explanation:
Box 1: Cognitive Service User -
Ensure that the members of a group named Management-Accountants can approve the FAQs.
Approving a FAQ equates to publishing it.
Cognitive Service User (read/write/publish): API permissions: all access to the Cognitive Services resource except the ability to:
1. Add new members to roles.
2. Create new resources.
Box 2: Cognitive Services QnA Maker Editor
Ensure that the members of a group named Consultant-Accountants can create and amend the FAQs.
QnA Maker Editor: API permissions:
1. Create KB API
2. Update KB API
3. Replace KB API
4. Replace Alterations
5. "Train API" [in new service model v5]
Box 3: Cognitive Services QnA Maker Read
Ensure that the members of a group named the Agent-CustomerServices can browse the FAQs.
QnA Maker Read: API Permissions:
1. Download KB API
2. List KBs for user API
3. Get Knowledge base details
4. Download Alterations
5. Generate Answer
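For context, the Generate Answer permission covers the runtime GenerateAnswer call that the Agent-CustomerServices group needs for browsing. A minimal sketch of that call in C#; the runtime host, knowledge base ID, endpoint key, and question are placeholders, not values from the scenario:
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class GenerateAnswerSample
{
    static async Task Main()
    {
        var http = new HttpClient();
        // The runtime call authenticates with the endpoint key, not the authoring key.
        http.DefaultRequestHeaders.Add("Authorization", "EndpointKey <endpoint-key>");

        var body = new StringContent(
            "{\"question\": \"How do I submit an expense report?\"}",
            Encoding.UTF8, "application/json");

        // POST {runtime-host}/qnamaker/knowledgebases/{kbId}/generateAnswer
        HttpResponseMessage response = await http.PostAsync(
            "https://<qna-runtime-host>/qnamaker/knowledgebases/<kb-id>/generateAnswer",
            body);

        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}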
Reference:
https://docs.microsoft.com/en-us/azure/cognitive-services/qnamaker/concepts/role-based-access-control
HOTSPOT -
You are developing an application that will use the Computer Vision client library. The application has the following code.
Answer :
Explanation:
Box 1: No -
Box 2: Yes -
The ComputerVision.analyzeImageInStreamAsync operation extracts a rich set of visual features based on the image content.
Box 3: No -
Images will be read from a stream.
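The same operation exists in the .NET client library as ComputerVisionClient.AnalyzeImageInStreamAsync. A minimal sketch, assuming the Microsoft.Azure.CognitiveServices.Vision.ComputerVision package; the endpoint, key, and file name are placeholders:
using System;
using System.Collections.Generic;
using System.IO;
using System.Threading.Tasks;
using Microsoft.Azure.CognitiveServices.Vision.ComputerVision;
using Microsoft.Azure.CognitiveServices.Vision.ComputerVision.Models;

class AnalyzeImageSample
{
    static async Task Main()
    {
        var client = new ComputerVisionClient(new ApiKeyServiceClientCredentials("<key>"))
        {
            Endpoint = "https://<resource-name>.cognitiveservices.azure.com/"
        };

        // The image is read from a stream rather than referenced by URL.
        using (Stream imageStream = File.OpenRead("image.jpg"))
        {
            ImageAnalysis result = await client.AnalyzeImageInStreamAsync(
                imageStream,
                visualFeatures: new List<VisualFeatureTypes?>
                {
                    VisualFeatureTypes.Description,
                    VisualFeatureTypes.Tags
                });

            Console.WriteLine(result.Description?.Captions?[0].Text);
        }
    }
}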
Reference:
https://docs.microsoft.com/en-us/java/api/com.microsoft.azure.cognitiveservices.vision.computervision.computervision.analyzeimageinstreamasync
You are developing a method that uses the Computer Vision client library. The method will perform optical character recognition (OCR) in images. The method has the following code.
Answer : BD
Explanation:
Example code:
do
{
    // Poll the service for the result of the asynchronous Read operation.
    results = await client.GetReadResultAsync(Guid.Parse(operationId));
}
while (results.Status == OperationStatusCodes.Running ||
       results.Status == OperationStatusCodes.NotStarted);
Reference:
https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/dotnet/ComputerVision/ComputerVisionQuickstart.cs
HOTSPOT -
You have a Computer Vision resource named contoso1 that is hosted in the West US Azure region.
You need to use contoso1 to generate a differently sized version of a product photo by using the smart cropping feature.
How should you complete the API URL? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Answer :
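Based on the referenced documentation, the completed request has roughly the following shape; the width, height, key, and image URL are illustrative placeholders:
POST https://westus.api.cognitive.microsoft.com/vision/v3.2/generateThumbnail?width=100&height=100&smartCropping=true
Content-Type: application/json
Ocp-Apim-Subscription-Key: <key>

{"url": "https://example.com/product-photo.jpg"}
Because contoso1 is hosted in the West US region, the regional host is westus.api.cognitive.microsoft.com, and smartCropping=true enables the smart cropping feature.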
Reference:
https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/concept-generating-thumbnails#examples
DRAG DROP -
You are developing a webpage that will use the Azure Video Analyzer for Media (previously Video Indexer) service to display videos of internal company meetings.
You embed the Player widget and the Cognitive Insights widget into the page.
You need to configure the widgets to meet the following requirements:
✑ Ensure that users can search for keywords.
✑ Display the names and faces of people in the video.
✑ Show captions in the video in English (United States).
How should you complete the URL for each widget? To answer, drag the appropriate values to the correct targets. Each value may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.
Select and Place:
Answer :
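Based on the referenced embed-widgets documentation, the requirements map to widget URL parameters roughly as follows; the account ID and video ID are placeholders:
Cognitive Insights widget (keyword search plus the names and faces of people):
https://www.videoindexer.ai/embed/insights/<accountId>/<videoId>/?widgets=people,keywords&controls=search

Player widget (captions in English (United States)):
https://www.videoindexer.ai/embed/player/<accountId>/<videoId>/?showCaptions=true&captions=en-US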
Reference:
https://docs.microsoft.com/en-us/azure/azure-video-analyzer/video-analyzer-for-media-docs/video-indexer-embed-widgets
DRAG DROP -
You train a Custom Vision model to identify a company's products by using the Retail domain.
You plan to deploy the model as part of an app for Android phones.
You need to prepare the model for deployment.
Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
Select and Place:
Answer :
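Per the referenced export documentation, only projects trained on a compact domain can be exported, so the sequence is: convert the project to a compact domain, retrain, then export (TensorFlow is the Android-friendly format). A sketch using the .NET training SDK, assuming an existing trainingApi client; the IDs are placeholders:
// 1. Switch the project to a compact domain; only compact domains are exportable.
Project project = trainingApi.GetProject(projectId);
project.Settings.DomainId = retailCompactDomainId;  // placeholder GUID for "Retail (compact)"
trainingApi.UpdateProject(projectId, project);

// 2. Retrain so that a new iteration exists on the compact domain.
Iteration iteration = trainingApi.TrainProject(projectId);

// 3. Export the trained iteration in a format an Android app can consume.
Export export = trainingApi.ExportIteration(projectId, iteration.Id, "TensorFlow");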
Reference:
https://docs.microsoft.com/en-us/azure/cognitive-services/custom-vision-service/export-your-model
HOTSPOT -
You are developing an application to recognize employees' faces by using the Face Recognition API. Images of the faces will be accessible from a URI endpoint.
The application has the following code.
Answer :
Reference:
https://docs.microsoft.com/en-us/azure/cognitive-services/face/face-api-how-to-topics/use-persondirectory
DRAG DROP -
You have a Custom Vision resource named acvdev in a development environment.
You have a Custom Vision resource named acvprod in a production environment.
In acvdev, you build an object detection model named obj1 in a project named proj1.
You need to move obj1 to acvprod.
Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
Select and Place:
Answer :
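Per the referenced copy/move documentation, the move is performed with the ExportProject and ImportProject REST operations: export from the source resource to obtain a token, then import into the target resource with that token. Approximately (endpoints, keys, and IDs are placeholders):
GET https://<acvdev-endpoint>/customvision/v3.3/Training/projects/<proj1-id>/export
Training-Key: <acvdev-training-key>
(The response contains a project token.)

POST https://<acvprod-endpoint>/customvision/v3.3/Training/projects/import?token=<token>
Training-Key: <acvprod-training-key>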
Reference:
https://docs.microsoft.com/en-us/azure/cognitive-services/custom-vision-service/copy-move-projects
DRAG DROP -
You are developing an application that will recognize faults in components produced on a factory production line. The components are specific to your business.
You need to use the Custom Vision API to help detect common faults.
Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
Select and Place:
Answer :
Explanation:
Step 1: Create a project -
Create a new project.
Step 2: Upload and tag the images
Choose training images. Then upload and tag the images.
Step 3: Train the classifier -
Train the classifier model.
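A minimal sketch of these three steps with the .NET training SDK (Microsoft.Azure.CognitiveServices.Vision.CustomVision.Training); the endpoint, key, tag name, and file name are placeholders:
using System;
using System.Collections.Generic;
using System.IO;
using Microsoft.Azure.CognitiveServices.Vision.CustomVision.Training;
using Microsoft.Azure.CognitiveServices.Vision.CustomVision.Training.Models;

var trainingApi = new CustomVisionTrainingClient(new ApiKeyServiceClientCredentials("<training-key>"))
{
    Endpoint = "https://<resource-name>.cognitiveservices.azure.com/"
};

// Step 1: Create a project.
Project project = trainingApi.CreateProject("Component Faults");

// Step 2: Upload and tag the images.
Tag faultTag = trainingApi.CreateTag(project.Id, "fault");
using (Stream image = File.OpenRead("component01.jpg"))
{
    trainingApi.CreateImagesFromData(project.Id, image, new List<Guid> { faultTag.Id });
}

// Step 3: Train the classifier.
Iteration iteration = trainingApi.TrainProject(project.Id);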
Reference:
https://docs.microsoft.com/en-us/azure/cognitive-services/custom-vision-service/getting-started-build-a-classifier
HOTSPOT -
You are building a model that will be used in an iOS app.
You have images of cats and dogs. Each image contains either a cat or a dog.
You need to use the Custom Vision service to detect whether an image is of a cat or a dog.
How should you configure the project in the Custom Vision portal? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Answer :
Explanation:
Box 1: Classification -
Incorrect Answers:
An object detection project is for detecting which objects, if any, from a set of candidates are present in an image.
Box 2: Multiclass -
A multiclass classification project is for classifying images into a set of tags, or target labels. An image can be assigned to one tag only.
Incorrect Answers:
A multilabel classification project is similar, but each image can have multiple tags assigned to it.
Box 3: General -
General: Optimized for a broad range of image classification tasks. If none of the other specific domains are appropriate, or if you're unsure of which domain to choose, select one of the General domains.
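A sketch of creating a project with this configuration through the .NET training SDK, assuming a trainingApi client as in the earlier sketches; the project name is a placeholder:
using System.Linq;

// Pick the General classification domain, then create a multiclass classification project.
Domain generalDomain = trainingApi.GetDomains()
    .First(d => d.Type == "Classification" && d.Name == "General");

Project project = trainingApi.CreateProject(
    "CatsAndDogs",
    domainId: generalDomain.Id,
    classificationType: "Multiclass");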
Reference:
https://cran.r-project.org/web/packages/AzureVision/vignettes/customvision.html
You have an Azure Video Analyzer for Media (previously Video Indexer) service that is used to provide a search interface over company videos on your company's website.
You need to be able to search for videos based on who is present in the video.
What should you do?
Answer : A
Explanation:
Video Indexer supports multiple Person models per account. Once a model is created, you can use it by providing the model ID of a specific Person model when uploading/indexing or reindexing a video. Training a new face for a video updates the specific custom model that the video was associated with.
Note: Video Indexer supports face detection and celebrity recognition for video content. The celebrity recognition feature covers about one million faces based on commonly requested data sources such as IMDB, Wikipedia, and top LinkedIn influencers. Faces that aren't recognized by the celebrity recognition feature are detected but left unnamed. Once you label a face with a name, the face and name are added to your account's Person model. Video Indexer will then recognize this face in your future and past videos.
Reference:
https://docs.microsoft.com/en-us/azure/media-services/video-indexer/customize-person-model-with-api
You use the Custom Vision service to build a classifier.
After training is complete, you need to evaluate the classifier.
Which two metrics are available for review? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.
Answer : AD
Explanation:
Custom Vision provides three metrics regarding the performance of your model: precision, recall, and AP.
Reference:
https://www.tallan.com/blog/2020/05/19/azure-custom-vision/
DRAG DROP -
You are developing a call to the Face API. The call must find similar faces from an existing list named employeefaces. The employeefaces list contains 60,000 images.
How should you complete the body of the HTTP request? To answer, drag the appropriate values to the correct targets. Each value may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.
Select and Place:
Answer :
Explanation:
Box 1: LargeFaceListID -
LargeFaceList: Add a face to a specified large face list, up to 1,000,000 faces.
Note: Given a query face's faceId, the operation searches for similar-looking faces in a faceId array, a face list, or a large face list. A "faceListId" is created by FaceList - Create and contains persistedFaceIds that will not expire. A "largeFaceListId" is created by LargeFaceList - Create and likewise contains persistedFaceIds that will not expire.
Incorrect Answers:
Not "faceListId": Add a face to a specified face list, up to 1,000 faces.
Box 2: matchFace -
Find Similar has two working modes, "matchPerson" and "matchFace". "matchPerson" is the default mode; it tries to find faces of the same person by using internal same-person thresholds. It is useful for finding a known person's other photos. Note that an empty list is returned if no faces pass the internal thresholds. "matchFace" mode ignores the same-person thresholds and returns ranked similar faces anyway, even if the similarity is low. It can be used in cases such as searching for celebrity-looking faces.
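Putting the two selections together, the request body has roughly this shape; the faceId value is a placeholder obtained from a prior Face - Detect call:
POST {Endpoint}/face/v1.0/findsimilars
Content-Type: application/json
Ocp-Apim-Subscription-Key: <key>

{
  "faceId": "<faceId-from-detect>",
  "largeFaceListId": "employeefaces",
  "maxNumOfCandidatesReturned": 10,
  "mode": "matchFace"
}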
Reference:
https://docs.microsoft.com/en-us/rest/api/faceapi/face/findsimilar
DRAG DROP -
You are developing a photo application that will find photos of a person based on a sample image by using the Face API.
You need to create a POST request to find the photos.
How should you complete the request? To answer, drag the appropriate values to the correct targets. Each value may be used once, more than once, or not at all.
You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.
Select and Place:
Answer :
Explanation:
Box 1: detect -
Face - Detect With Url: Detect human faces in an image, return face rectangles, and optionally with faceIds, landmarks, and attributes.
POST {Endpoint}/face/v1.0/detect
Box 2: matchPerson -
Find Similar has two working modes, "matchPerson" and "matchFace". "matchPerson" is the default mode; it tries to find faces of the same person by using internal same-person thresholds. It is useful for finding a known person's other photos. Note that an empty list is returned if no faces pass the internal thresholds. "matchFace" mode ignores the same-person thresholds and returns ranked similar faces anyway, even if the similarity is low. It can be used in cases such as searching for celebrity-looking faces.
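Putting both calls together, the flow is roughly as follows; the image URL, faceId, and list ID are placeholders:
POST {Endpoint}/face/v1.0/detect
Content-Type: application/json
Ocp-Apim-Subscription-Key: <key>

{"url": "https://example.com/sample-person.jpg"}
The response contains a faceId, which then feeds the similarity search:
POST {Endpoint}/face/v1.0/findsimilars
Content-Type: application/json
Ocp-Apim-Subscription-Key: <key>

{
  "faceId": "<faceId-from-detect>",
  "largeFaceListId": "<photo-list-id>",
  "mode": "matchPerson"
}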
Reference:
https://docs.microsoft.com/en-us/rest/api/faceapi/face/detectwithurl https://docs.microsoft.com/en-us/rest/api/faceapi/face/findsimilar
HOTSPOT -
You develop a test method to verify the results retrieved from a call to the Computer Vision API. The call is used to detect the presence of company logos in images. The call returns a collection of brands named brands.
You have the following code segment.
Answer :
Explanation:
Box 1: Yes -
Box 2: Yes -
Coordinates of a rectangle in the API refer to the top left corner.
Box 3: No -
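For context, a sketch of how a test method might walk the collection using the .NET SDK types; brands is the collection named in the question:
// Each DetectedBrand exposes a name, a confidence score, and a bounding
// rectangle whose X and Y are measured from the top-left corner of the image.
foreach (DetectedBrand brand in brands)
{
    Console.WriteLine($"{brand.Name} ({brand.Confidence:P1}) at " +
        $"x={brand.Rectangle.X}, y={brand.Rectangle.Y}, " +
        $"w={brand.Rectangle.W}, h={brand.Rectangle.H}");
}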
Reference:
https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/concept-brand-detection