The Joint Artificial Intelligence Center (JAIC) is the Department of Defense’s (DoD) Artificial Intelligence (AI) Center of Excellence that provides a critical mass of expertise to help the Department harness the game-changing power of AI. To help operationally prepare the Department for AI, the JAIC integrates technology development with the requisite policies, knowledge, processes, and relationships to ensure long term success and scalability.
The mission of the JAIC is to transform the DoD by accelerating the delivery and adoption of AI to achieve mission impact at scale. The goal is to use AI to solve large and complex problem sets that span multiple services, then ensure the Services and Components have real-time access to ever-improving libraries of data sets and tools.
In this episode of “The Convergence” we discuss how the JAIC is bringing AI to the Joint Force (and the associated challenges!) with the following panel members:
- Jacqueline Tame, Acting Deputy Director, Chief Performance Officer
- Alka Patel, Head of AI Ethics Policy
- Jane Pinelis, Chief, Testing and Evaluation, Artificial Intelligence/Machine Learning (AI/ML)
The following bullet points highlight the key insights from our interview:
- We have not seen a reorganization of the DoD since the Goldwater–Nichols Act in 1986. AI offers a catalyst for what is next.
- The DoD faces a temporal split in how it integrates AI. AI is ready now to start tackling Phase I objectives, alleviating redundant and repetitive work, but legacy processes and cultural barriers remain obstacles to starting this work.
- Phase II objectives of integrating AI on the battlefield present additional, measurable obstacles. Becoming AI-ready requires greater open-mindedness at the individual level about what is possible, a willingness to accept risk, improved data readiness, modernized information technology, recruitment of the requisite talent, and implementation of the necessary policies.
- Phase II represents AI integration at a level that could redefine what it means to be Joint, moving from doctrinal definitions and the enormous effort currently required to operate jointly to the technical ability to do so at speed and scale.
- The human capital to implement AI at scale requires a diverse workforce and a change in culture.
- We need to broaden our aperture, adding psychologists, cognitive behavioral scientists, education and learning experts, and more analytical thinkers generally to our AI workforce.
- Changing our culture and messaging is more important than compensation when attracting this type of workforce. Retention requires a culture that encourages professional development and work on side projects (e.g., Google's 20% time).
- Integrating AI on the battlefield will require some level of run-time monitoring to identify emergent negative behaviors. This idea is not new: humans are the biggest autonomous systems on the battlefield, and they sometimes act unethically.
- What keeps these experts up at night?
- The failure of DoD to recognize the potential in distributed ledger tech that could solve many current challenges.
- Adversarial AI: beyond an adversary turning “a panda into a toaster.” Visually tricking AI may be a popular discussion point, but it is a niche problem that diverts valuable R&D resources from easier and more problematic attack vectors like adversarial patches and data poisoning. We need to dedicate our resources to building resiliency into our AI-enabled systems or risk being vulnerable or too late.
- How do we make sure that the tools developed are used by moral agents? Having the right humans operating these systems will become even more important.
Stay tuned to the Mad Scientist Laboratory for our next episode of “The Convergence” — Reading and Leading in the Future with LTC Joe Byerly on 17 December 2020!