Artificial Intelligence Governance Professional v1.0

Page:    1 / 11   
Exam contains 158 questions

A US company has developed an AI system, CrimeBuster 7909, that collects information about incarcerated individuals and predicts whether someone is likely to commit another crime if released from prison.
When considering expanding to the EU market, this type of technology would:

  • A. Require the company to register the tool with the EU database.
  • B. Require the application of privacy enhancing technologies.
  • C. Be subject to approval by the relevant EU authority.
  • D. Be banned under the EU AI Act.


Answer : D

Which of the following disclosures is NOT required for an EU organization that developed and deployed a high-risk AI system?

  • A. The human oversight measures employed.
  • B. How an individual may contest a decision.
  • C. The location(s) where data is stored.
  • D. The fact that an AI system is being used.


Answer : C

In accordance with the EU AI Act, for how long after a high-risk AI system has been placed on the market must the provider keep the relevant documentation at the disposal of the national competent authorities?

  • A. 10 years.
  • B. 8 years.
  • C. 6 years.
  • D. 5 years.


Answer : A

The OECD’s Ethical AI Governance Framework is a self-regulation model that proposes to prevent societal harms by:

  • A. Establishing explainability criteria to ethically source and use data to train AI systems
  • B. Defining ethical requirements specific to each industry sector and high-risk AI domain.
  • C. Focusing on ethical AI technical design and post-deployment monitoring
  • D. Balancing AI innovation with ethical considerations.


Answer : D

The ISO 42001 International Standard offers guidance for organizations to develop trustworthy AI management systems by:

  • A. Requiring specific minimum parameters for key suppliers and key aspects of AI management systems.
  • B. Requiring organizations to continuously improve the effectiveness of their AI management systems.
  • C. Focusing on high-risk aspects of development of AI management systems.
  • D. Explicitly overriding previously issued and now outdated ISO standards.


Answer : B

What is the main purpose of accountability structures under the Govern function of the NIST AI Risk Management Framework?

  • A. To empower and train appropriate cross-functional teams.
  • B. To establish diverse, equitable and inclusive processes.
  • C. To determine responsibility for allocating budgetary resources.
  • D. To enable and encourage participation by external stakeholders.


Answer : A

The initial pilot effort for NIST’s Assessing Risks and Impacts of AI (ARIA) Program is focused on risks associated with which of the following?

  • A. Large language models.
  • B. Text-to-image models.
  • C. Recommender systems.
  • D. Facial recognition systems.


Answer : A

CASE STUDY -
Please use the following to answer the next question:
A global marketing agency is adapting a large language model (“LLM”) to generate content for an upcoming marketing campaign for a client’s new product: a hard hat designed for construction workers of any gender to better protect them from head injuries.
The marketing agency is accessing the LLM through an application programming interface (“API”) developed by a third-party technology company. They want to generate text to be used for targeted advertising communications that highlight the benefits of the hard hat to potential purchasers. Both the marketing agency and the technology company have taken reasonable steps to address AI governance.
The marketing agency has:
  • Entered into a contract with the technology company with suitable representations and warranties.
  • Completed an impact assessment on the LLM for this intended use.
  • Built technical guidance on how to measure and mitigate bias in the LLM.
  • Enabled technical aspects of transparency, explainability, robustness and privacy.
  • Followed applicable regulatory requirements.
  • Created specific legal statements and disclosures regarding the use of the AI in its client’s advertising.
The technology company has:
  • Provided guidance and resources to developers to address environmental concerns.
  • Built technical guidance on how to measure and mitigate bias in the LLM.
  • Provided tools and resources to measure bias specific to the LLM.
  • Enabled technical aspects of transparency, explainability, robustness and privacy.
  • Mapped and mitigated potential societal harms and large-scale impacts.
  • Followed applicable regulatory requirements and industry standards.
  • Created specific legal statements and disclosures regarding the LLM, including with respect to IP and rights to data.
Which stakeholder is responsible for lawful collection of data for the training of the foundational AI model?

  • A. The marketing agency.
  • B. The tech company.
  • C. The data aggregator.
  • D. The marketing agency’s client.


Answer : B

CASE STUDY -
Please use the following to answer the next question:
A global marketing agency is adapting a large language model (“LLM”) to generate content for an upcoming marketing campaign for a client’s new product: a hard hat designed for construction workers of any gender to better protect them from head injuries.
The marketing agency is accessing the LLM through an application programming interface (“API”) developed by a third-party technology company. They want to generate text to be used for targeted advertising communications that highlight the benefits of the hard hat to potential purchasers. Both the marketing agency and the technology company have taken reasonable steps to address AI governance.
The marketing agency has:
  • Entered into a contract with the technology company with suitable representations and warranties.
  • Completed an impact assessment on the LLM for this intended use.
  • Built technical guidance on how to measure and mitigate bias in the LLM.
  • Enabled technical aspects of transparency, explainability, robustness and privacy.
  • Followed applicable regulatory requirements.
  • Created specific legal statements and disclosures regarding the use of the AI in its client’s advertising.
The technology company has:
  • Provided guidance and resources to developers to address environmental concerns.
  • Built technical guidance on how to measure and mitigate bias in the LLM.
  • Provided tools and resources to measure bias specific to the LLM.
  • Enabled technical aspects of transparency, explainability, robustness and privacy.
  • Mapped and mitigated potential societal harms and large-scale impacts.
  • Followed applicable regulatory requirements and industry standards.
  • Created specific legal statements and disclosures regarding the LLM, including with respect to IP and rights to data.
All of the following results would be considered biased outputs from this AI system EXCEPT:

  • A. The generated ads are sent to construction companies, not individual workers.
  • B. The content generated for minority construction workers is insufficient.
  • C. The images of female workers are hyper-sexualized.
  • D. The advertising text generated for female audiences focuses on color and style.


Answer : A

CASE STUDY -
Please use the following to answer the next question:
A global marketing agency is adapting a large language model (“LLM”) to generate content for an upcoming marketing campaign for a client’s new product: a hard hat designed for construction workers of any gender to better protect them from head injuries.
The marketing agency is accessing the LLM through an application programming interface (“API”) developed by a third-party technology company. They want to generate text to be used for targeted advertising communications that highlight the benefits of the hard hat to potential purchasers. Both the marketing agency and the technology company have taken reasonable steps to address AI governance.
The marketing agency has:
  • Entered into a contract with the technology company with suitable representations and warranties.
  • Completed an impact assessment on the LLM for this intended use.
  • Built technical guidance on how to measure and mitigate bias in the LLM.
  • Enabled technical aspects of transparency, explainability, robustness and privacy.
  • Followed applicable regulatory requirements.
  • Created specific legal statements and disclosures regarding the use of the AI in its client’s advertising.
The technology company has:
  • Provided guidance and resources to developers to address environmental concerns.
  • Built technical guidance on how to measure and mitigate bias in the LLM.
  • Provided tools and resources to measure bias specific to the LLM.
  • Enabled technical aspects of transparency, explainability, robustness and privacy.
  • Mapped and mitigated potential societal harms and large-scale impacts.
  • Followed applicable regulatory requirements and industry standards.
  • Created specific legal statements and disclosures regarding the LLM, including with respect to IP and rights to data.
All of the following should be included in the marketing company’s disclosures about the use of the LLM EXCEPT:

  • A. Intended purpose.
  • B. Proprietary methods.
  • C. Compliance with law.
  • D. Acknowledgement of limitations.


Answer : B

CASE STUDY -
Please use the following to answer the next question:
A global marketing agency is adapting a large language model (“LLM”) to generate content for an upcoming marketing campaign for a client’s new product: a hard hat designed for construction workers of any gender to better protect them from head injuries.
The marketing agency is accessing the LLM through an application programming interface (“API”) developed by a third-party technology company. They want to generate text to be used for targeted advertising communications that highlight the benefits of the hard hat to potential purchasers. Both the marketing agency and the technology company have taken reasonable steps to address AI governance.
The marketing agency has:
  • Entered into a contract with the technology company with suitable representations and warranties.
  • Completed an impact assessment on the LLM for this intended use.
  • Built technical guidance on how to measure and mitigate bias in the LLM.
  • Enabled technical aspects of transparency, explainability, robustness and privacy.
  • Followed applicable regulatory requirements.
  • Created specific legal statements and disclosures regarding the use of the AI in its client’s advertising.
The technology company has:
  • Provided guidance and resources to developers to address environmental concerns.
  • Built technical guidance on how to measure and mitigate bias in the LLM.
  • Provided tools and resources to measure bias specific to the LLM.
  • Enabled technical aspects of transparency, explainability, robustness and privacy.
  • Mapped and mitigated potential societal harms and large-scale impacts.
  • Followed applicable regulatory requirements and industry standards.
  • Created specific legal statements and disclosures regarding the LLM, including with respect to IP and rights to data.
While the marketing agency took steps to mitigate its risks, the best additional step would be to:

  • A. Negotiate an intellectual property indemnity from the technology company.
  • B. Evaluate the use of AI in the marketing industry to identify best practices.
  • C. Engage a third party to lead the procurement selection process.
  • D. Establish a governance committee to oversee the project.


Answer : D

Which of the following use cases would be best served by a non-AI solution?

  • A. A non-profit wants to develop a social media presence.
  • B. A business analyst wants to develop advertising campaigns.
  • C. An e-commerce provider wants to make personalized recommendations.
  • D. A customer service agency wants to automate answers to common questions.


Answer : A

Training data is best defined as a subset of data that is used to:

  • A. Enable a model to detect and learn patterns.
  • B. Fine-tune a model to improve accuracy and prevent overfitting.
  • C. Detect the initial sources of biases to mitigate prior to deployment.
  • D. Resemble the structure and statistical properties of production data.


Answer : A
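The distinction behind answer A — that training data is the subset a model learns patterns from, held apart from the data used to evaluate it — can be illustrated with a minimal sketch. This is illustrative Python only (the function name and fractions are hypothetical, not part of any exam material or standard):

```python
import random

def split_dataset(records, train_frac=0.8, seed=42):
    """Shuffle and split records into a training subset (used by the
    model to detect and learn patterns) and a held-out test subset
    (used only to evaluate the trained model afterwards)."""
    rng = random.Random(seed)          # fixed seed for reproducibility
    shuffled = records[:]              # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

train, test = split_dataset(list(range(100)))
print(len(train), len(test))  # 80 20
```

Keeping the test subset out of training is what makes the later performance evaluation meaningful; the other options in the question describe validation, bias detection, or synthetic data, not training data itself.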

Which of the following steps occurs in the design phase of the AI life cycle?

  • A. Data augmentation.
  • B. Model explainability.
  • C. Impact assessment.
  • D. Performance evaluation.


Answer : C

What is most likely the first action that a developer takes to map, plan and scope an AI project?

  • A. Define the business case and perform a cost/benefit analysis answering the question of “why AI?”
  • B. Use a test, evaluation, verification, validation (TEVV) process.
  • C. Perform an algorithmic impact assessment leveraging PIAs.
  • D. Determine feasibility and optionality of redress.


Answer : A


Certlibrary.com is owned by MBS Tech Limited: Room 1905 Nam Wo Hong Building, 148 Wing Lok Street, Sheung Wan, Hong Kong. Company registration number: 2310926