Caregiver AI Challenge Judging Criteria: Track 2, Phase 1

All applications submitted in each phase of Track 2, AI Tools for Extending the Caregiver Workforce, will be evaluated by a diverse panel of experts with competencies in AI, usability engineering, caregiving, qualitative sciences, and home-based care. The evaluation team will include both governmental and non-governmental representatives.

Judges will evaluate the extent to which applications address the following criteria.

Responsiveness to Need

  • Understanding of the Need: To what extent does the proposal clearly describe the specific workforce challenges being addressed? How significant are the identified challenges to the caregiving system?
  • Responsiveness to Need: How well does the proposal explain why the AI solution is viable and how it will solve the specific workforce challenge identified?
  • Impact: To what extent does the proposed solution show promise for reducing burden, creating efficiencies, improving direct care worker well-being, or extending the workforce?

User-Centered

  • User Input: To what extent is the proposed solution based on end-user input (e.g., reflects the needs and realities of direct care workers, supervisors, and care organizations)? Is there traceability between user insights and design decisions?
  • Co-Implementation: How well does the proposal describe how organizations and others (e.g., direct care workers, care recipients) will be involved early and continuously in the implementation process and in informing adaptations to improve the solution or AI tool?

Implementation

  • Deployment Readiness: To what extent does the approach to implementation demonstrate a clear, actionable plan for real-world use of the AI-enabled tool and integration into organizational environments? To what extent is the approach to implementation feasible in terms of resources, viability, and operational complexity?
  • Timeline: How well does the proposal detail a reasonable and realistic time frame for testing (Phase 2) in 2026-2027, followed by a year of implementation (Phase 3)?
  • Testing and Iteration: How strong is the plan for testing the solution, collecting effectiveness metrics, and using user input to iteratively refine the tool?
  • Metrics and Evaluation: How credible is the plan to measure performance of the AI-enabled tool(s) in the organization’s environment? Does the plan incorporate industry best practices for testing AI solutions, such as Fast Healthcare Interoperability Resources (FHIR)? (An illustrative sketch of metrics collection follows this list.)
  • Evaluation and Adaptation: How reasonable is the plan for using the performance results to identify where adjustments are needed and make them?
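
For illustration only, the sketch below shows one way an applicant might record and summarize effectiveness metrics during Phase 2 testing. The metric names, fields, and sample values are hypothetical examples, not required measures or a prescribed format.

```python
# Hypothetical sketch: recording per-task observations during real-world testing
# of an AI-enabled tool and rolling them up into burden-reduction and safety
# metrics that can inform the next design iteration. All names are examples.
from dataclasses import dataclass
from statistics import mean
from typing import List


@dataclass
class TestObservation:
    """One observation collected during testing of the AI-enabled tool."""
    worker_id: str            # de-identified direct care worker
    task: str                 # e.g., "visit documentation"
    minutes_with_tool: float  # time on task using the AI tool
    minutes_baseline: float   # time on task under the current workflow
    error_flag: bool          # whether a use-related error occurred


def summarize(observations: List[TestObservation]) -> dict:
    """Summarize time saved and error rate across all observations."""
    return {
        "n_observations": len(observations),
        "avg_minutes_saved": mean(
            o.minutes_baseline - o.minutes_with_tool for o in observations
        ),
        "error_rate": sum(o.error_flag for o in observations) / len(observations),
    }


if __name__ == "__main__":
    sample = [
        TestObservation("dcw-01", "visit documentation", 7.5, 12.0, False),
        TestObservation("dcw-02", "visit documentation", 9.0, 11.0, True),
    ]
    print(summarize(sample))
```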

Usability and Integration

  • User-Centered Design: Does the proposed plan aim to design out user error rather than only improve efficiency? In other words, is the user interface designed to eliminate or reduce use-related risks, or does risk mitigation rely heavily on protective measures or labeling?
  • Transparency: Does the design support transparency such that the user understands what the system is doing, why, and what will (or should) happen next?
  • Empowerment: Does the tool output support human judgment as opposed to replacing it?
  • Usability: Will the tool be designed for and tested in a realistic environment as opposed to ideal conditions?
  • Integration: To what extent does the narrative clearly describe how the AI solution will be integrated into existing operational workflows to create greater efficiencies?
  • Interoperability: As applicable, to what extent does the proposal address interoperability of the AI tool with electronic medical records (EMRs), health data systems, or other home-based systems, devices, or software? (See the sketch below.)
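
As one hypothetical illustration of this kind of interoperability, the sketch below queries a FHIR R4 server for a care recipient's recent body-weight Observations. The endpoint, patient identifier, and token are placeholders; a real integration would follow the partner organization's FHIR implementation and authorization requirements (for example, SMART on FHIR).

```python
# Hypothetical sketch: reading recent body-weight Observations from an EMR that
# exposes a FHIR R4 REST API. The base URL and patient ID below are placeholders.
import requests

FHIR_BASE = "https://example-emr.invalid/fhir"   # placeholder FHIR endpoint
PATIENT_ID = "example-patient-id"                # placeholder care recipient ID


def latest_weight_observations(token: str, count: int = 5) -> list:
    """Return the most recent body-weight Observation resources for one patient."""
    response = requests.get(
        f"{FHIR_BASE}/Observation",
        params={
            "patient": PATIENT_ID,
            "code": "http://loinc.org|29463-7",  # LOINC code for body weight
            "_sort": "-date",                    # newest first
            "_count": count,
        },
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/fhir+json",
        },
        timeout=10,
    )
    response.raise_for_status()
    bundle = response.json()
    return [entry["resource"] for entry in bundle.get("entry", [])]
```

Standard search parameters such as patient, code, _sort, and _count keep a query like this portable across conforming FHIR servers.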

Alignment With the Caregiver Challenge AI Principles

  • Protect privacy, dignity, and choice: To what extent does the AI solution protect personal privacy, enable data portability, and respect dignity? Does the solution have clear limits on what information is collected, how it is used, and who can see it? Are there mechanisms for the direct care worker to understand how the data is being used and control who has access?
  • Support human-in-the-loop accountability: Does the solution incorporate human-in-the-loop accountability? Does it clearly demonstrate its reasoning to the user and note when the results are based on weak data? Is the user able to correct and adjust the outputs of the solution? (An illustrative sketch follows this list.)
  • Support caregiver well-being and burden reduction: To what extent does the solution reduce direct care worker burden, stress, and time demands, while supporting physical and mental well-being? Will the solution fit smoothly into daily administrative activities without creating additional work?
  • Supplement, but not replace, human connection: To what extent does the solution support direct care workers to focus more of their time on human connection?
  • Allow personalized and flexible care: To what extent is the solution customizable to reflect the individualized needs, preferences, and lifestyles of direct care workers and care recipients? How well does the solution handle complexity in care needs?
  • Promote safety, reliability, and transparency: To what extent is the performance and decision-making of the AI tool transparent? Does the solution have safeguards to avoid bias and adverse impacts? How well does the tool reflect current evidence and best practices, and how effectively does it strengthen safety protections?
  • Affordability and access: To what extent is affordability incorporated into the design of the solution, with transparent and reasonable costs that are assessed during development or testing?
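
As a minimal, hypothetical sketch of human-in-the-loop accountability, the example below never applies an AI-drafted note automatically: the direct care worker sees the draft and its rationale, is warned when the draft rests on limited data, and can accept, edit, or reject it. The fields and confidence threshold are illustrative assumptions, not a prescribed design.

```python
# Hypothetical sketch: an AI-drafted note is only recorded after the direct care
# worker reviews it. Low-confidence drafts are flagged, and the worker can
# accept, edit, or reject the output. Field names and threshold are examples.
from dataclasses import dataclass

LOW_CONFIDENCE = 0.6  # example threshold for flagging drafts built on weak data


@dataclass
class DraftNote:
    text: str          # AI-generated draft (e.g., a visit summary)
    confidence: float  # the tool's self-reported confidence in the draft
    rationale: str     # plain-language explanation of how the draft was produced


def review(draft: DraftNote) -> str:
    """Show the draft, its rationale, and any weak-data warning; the worker decides."""
    print(f"Draft: {draft.text}")
    print(f"Why:   {draft.rationale}")
    if draft.confidence < LOW_CONFIDENCE:
        print("Note: this draft is based on limited data; please verify carefully.")
    choice = input("Accept as-is (a), edit (e), or reject (r)? ").strip().lower()
    if choice == "a":
        return draft.text
    if choice == "e":
        return input("Enter your corrected note: ")
    return ""  # rejected: nothing is recorded without the worker's approval
```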

Partnerships and Collaboration

  • To what extent does the proposal demonstrate collaboration with relevant stakeholders (e.g., direct care workers, state agencies, workforce agencies, community groups) throughout design, testing, and implementation?
  • To what extent does the project identify partnerships that support sustainability, such as for operations, maintenance, workforce training, or responsible AI governance?


Disclaimer: The judging criteria are used solely by ACL staff and their designated evaluation team to assess relevance, potential impact, and feasibility of submitted proposals. ACL does not intend to evaluate proposals in this competition against regulatory standards, requirements, or policies administered by the FDA. No formal determinations of safety or effectiveness will be conducted by ACL as part of this competition. Any assessments made by ACL in connection with this competition do not constitute and should not be interpreted as FDA determinations. Depending on the intended use of a proposed AI tool, engagement with the FDA may be helpful or needed.

 

