
Caregiver AI Challenge Judging Criteria: Track 1, Phase 1

All applications submitted in each phase of Track 1, AI Tools to Support Caregivers, will be evaluated by a diverse panel of experts with competencies in AI, usability engineering, caregiving, qualitative sciences, and home-based care. The evaluation team will include both governmental and non-governmental representatives.

Judges will evaluate the extent to which applications meet the following criteria.

Responsiveness to Need

  • Understanding of the Need: To what extent does the proposal clearly describe the specific problem(s) or caregiver challenge(s) being addressed? How significant is the problem or challenge identified to the caregiver experience?
  • Responsiveness to Need: How well does the proposal describe how the AI-enabled tool is a viable solution that will address the specific caregiver problem or challenge identified?
  • Impact: To what extent will the proposed solution reduce burden, improve caregiver well-being, or extend the caregiving workforce?

User-Centered

  • Caregiver Input: To what extent is the proposed solution based on the input and experiences of caregivers and, as appropriate, the care recipient? Is there traceability between user insights and design decisions?
  • Co-Implementation: How well does the proposal describe how caregivers and, as appropriate, the care recipient will be involved early and continuously in the implementation process and in informing adaptations that improve the solution or AI-enabled tool?

Implementation

  • Deployment Readiness and Feasibility: To what extent does the implementation approach demonstrate a clear, actionable plan for real-world use of the AI-enabled tool and its integration into caregiving environments? To what extent is the approach feasible in terms of resources and overall viability?
  • Timeline: How well does the proposal detail a reasonable, realistic time frame for testing (Phase 2) in 2026-2027, followed by an additional year for implementation (Phase 3)?
  • Metrics: How credible is the plan to measure the performance of the AI-enabled tool(s) in the caregiving environment? Does the plan incorporate industry best practices for testing AI solutions, such as the Fast Healthcare Interoperability Resources (FHIR) standard?
  • Evaluation and Adaptation: How reasonable is the plan for using performance results to identify and make needed adjustments?

Usability and Integration

  • User Error Reduction: Does the proposed plan aim to design out user error, rather than only to improve efficiency? In other words, is the user interface designed to eliminate or reduce use-related risks, or does risk mitigation rely heavily on protective measures or labeling?
  • Transparency: Does the design support transparency such that the user understands what the system is doing, why, and what will (or should) happen next?
  • Empowerment: Does the tool’s output, as interpreted by the caregiver, support human judgment rather than replace it?
  • Usability: Will the tool be designed for and tested in a realistic environment rather than under ideal conditions?
  • Integration: To what extent does the narrative clearly describe how AI will be integrated to support caregiving in the home?
  • Interoperability: As applicable, to what extent does the proposal address interoperability of the AI tool with EMRs, health data systems, or other home-based systems, devices, or software?

Alignment With the Caregiver Challenge AI Principles

  • Protect privacy, dignity, and choice: To what extent does the AI solution protect personal privacy, enable data portability, and respect dignity? Does the solution have clear limits on what information is collected, how it is used, and who can see it? Are there mechanisms for the care recipient to understand how the data is being used and control who has access?
  • Support human-in-the-loop accountability: Does the solution incorporate human-in-the-loop accountability? Does it clearly demonstrate its reasoning to the user and note when the results are based on weak data? Is the user able to correct and adjust the outputs of the solution?
  • Support caregiver well-being and burden reduction: To what extent does the solution reduce caregiver burden, stress, and time demands while supporting caregivers’ physical and mental well-being? Will the solution fit smoothly into daily care routines without creating additional work?
  • Supplement, but not replace, human connection: To what extent does the solution enable the caregiver to spend more of their time on human connection?
  • Allow personalized and flexible care: To what extent is the solution customizable to match an individual’s needs, preferences, and lifestyle? How well does the solution address complexity in care needs?
  • Promote safety, reliability, and transparency: To what extent are the performance and decision-making of the AI tool transparent? Does the solution have safeguards to avoid bias and adverse impacts on caregivers and care recipients? How well does the tool reflect current evidence and best practices, and how effectively does it strengthen safety protections for people receiving care?
  • Promote affordability and access: To what extent is affordability incorporated into the design of the solution? Are costs transparent and reasonable, and will they be assessed during development or testing to ensure accessibility for caregivers and care recipients?

Partnerships and Collaboration

  • To what extent does the approach include collaboration with individuals or groups (e.g., care recipients, providers, aging and disability groups, health networks) throughout the entire process?
  • Will the project forge partnerships or identify stakeholders that could help support the proposed work? The purpose of such partnerships or stakeholder involvement could include, but is not limited to, financial support, operations and maintenance, design consultation, or AI education.


Disclaimer: The judging criteria are used solely by ACL staff and their designated evaluation team to assess the relevance, potential impact, and feasibility of submitted proposals. ACL does not intend to evaluate proposals in this competition against regulatory standards, requirements, or policies administered by the FDA. No formal determinations of safety or effectiveness will be conducted by ACL as part of this competition. Any assessments made by ACL in connection with this competition do not constitute, and should not be interpreted as, FDA determinations. Depending on the intended use of a proposed AI tool, engagement with the FDA may be helpful or necessary.