DARPA's XAI Explainable Artificial Intelligence Future
Politics / AI
May 15, 2018 - 02:00 PM GMT

The popular scenario has AI deploying autonomous killer military robots as storm troopers. The mission of DARPA is to create the cutting edge of weaponized technology. So when a report contends that the Pentagon is now using Jade Helm exercises to teach Skynet how to kill humans, it is not simply a screenplay for a Hollywood blockbuster.
"Simply AI quantum computing technology that can produce the holographic battle simulations and, in addition, "has the ability to use vast amounts of data being collected on the human domain to generate human terrain systems in geographic population centric locations" as a means of identifying and eliminating targets - insurgents, rebels or "whatever labels that can be flagged as targets in a Global Information Grid for Network Centric Warfare environments."
While this assessment may alarm the most fearful, Steven Walker, director of the Defense Advanced Research Projects Agency, presents a far more sedate viewpoint in "DARPA: Next-generation artificial intelligence in the works."
"Walker described the current generation of AI as its “second wave,” which has led to breakthroughs like autonomous vehicles. By comparison, “first wave” applications, like tax preparation software, follow simple logic rules and are widely used in consumer technology.
While second-wave AI technology has the potential to, for example, control the use of the electromagnetic spectrum on the battlefield, Walker said the tools aren’t flexible enough to adapt to new inputs.
The third wave of AI will rely on contextual adaptation — having a computer or machine understand the context of the environment it’s working in, and being able to learn and adapt based on changes in that environment."
Here is where the XAI model comes into play. The authoritative publication Jane's states that DARPA's XAI seeks explanations from autonomous systems: "According to DARPA, XAI aims to “produce more explainable models, while maintaining a high level of learning performance (prediction accuracy); and enable human users to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners”."
Mr. David Gunning, the DARPA program manager for the effort, provides insight into why Explainable Artificial Intelligence (XAI) is the next development.
"XAI is one of a handful of current DARPA programs expected to enable “third-wave AI systems”, where machines understand the context and environment in which they operate, and over time build underlying explanatory models that allow them to characterize real world phenomena.
The XAI program is focused on the development of multiple systems by addressing challenge problems in two areas: (1) machine learning problems to classify events of interest in heterogeneous, multimedia data; and (2) machine learning problems to construct decision policies for an autonomous system to perform a variety of simulated missions. These two challenge problem areas were chosen to represent the intersection of two important machine learning approaches (classification and reinforcement learning) and two important operational problem areas for the DoD (intelligence analysis and autonomous systems)."
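As a toy-scale illustration of the second challenge area (this is not any actual DARPA problem, just a hedged sketch), the snippet below trains a tabular Q-learning agent whose decision policy is inherently explainable: the learned policy is a lookup table a human can read directly, which is precisely what a deep-network policy is not.

```python
# A toy tabular Q-learning agent on a 5-state corridor. Illustrative only,
# NOT a DARPA challenge problem: the point is that a tabular decision policy
# is its own explanation, since the state -> action table is human-readable.
import random

ACTIONS = ["left", "right"]
N_STATES = 5                      # states 0..4 on a line; state 4 is the goal
GOAL = N_STATES - 1
alpha, gamma, epsilon = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Move one cell; reward 1.0 only upon reaching the goal state."""
    nxt = min(state + 1, GOAL) if action == "right" else max(state - 1, 0)
    return nxt, (1.0 if nxt == GOAL else 0.0)

def greedy(state):
    """Pick the highest-valued action, breaking ties at random."""
    best = max(Q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(state, a)] == best])

for _ in range(500):              # training episodes
    s = 0
    while s != GOAL:
        a = random.choice(ACTIONS) if random.random() < epsilon else greedy(s)
        s2, r = step(s, a)
        # Standard one-step Q-learning update
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS)
                              - Q[(s, a)])
        s = s2

# The "explanation" of this policy is the table itself, printed as rules:
for s in range(GOAL):
    print(f"in state {s}: prefer '{greedy(s)}' (Q = {Q[(s, greedy(s))]:.2f})")
```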
The FedBizOpps government site provides this synopsis: "The goal of Explainable AI (XAI) is to create a suite of new or modified machine learning techniques that produce explainable models that, when combined with effective explanation techniques, enable end users to understand, appropriately trust, and effectively manage the emerging generation of AI systems."
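One generic route to that goal, sketched below purely for illustration (it is not DARPA's actual method, and it assumes the scikit-learn library with arbitrary dataset and model choices), is to distill an opaque, high-accuracy model into an interpretable surrogate and report how faithfully the surrogate imitates it.

```python
# A minimal sketch of "explainable models": distill an opaque, high-accuracy
# classifier into a shallow, human-readable surrogate tree. Assumes
# scikit-learn; the dataset and model choices are illustrative only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

# 1. Train a high-performing but opaque "black box" classifier.
black_box = RandomForestClassifier(n_estimators=200, random_state=0)
black_box.fit(X_train, y_train)

# 2. Fit a depth-3 decision tree to imitate the black box's predictions.
#    The tree trades a little fidelity for rules an analyst can read.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

print("black-box test accuracy:", black_box.score(X_test, y_test))
print("surrogate fidelity:",
      surrogate.score(X_test, black_box.predict(X_test)))
print(export_text(surrogate, feature_names=list(data.feature_names)))
```

The printed tree is the "explanation technique" in miniature: the end user sees a handful of if-then rules instead of an ensemble of hundreds of trees, along with a fidelity score indicating how well those rules track the real model.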
The private sector is involved in these developments. A question that gets lost involves national security, since Xerox is being acquired by Fujifilm. But why worry over such mere details when the machines are on a path to becoming self-directed networks?
"PARC, a Xerox company, today announced it has been selected by the Defense Advanced Research Projects Agency (DARPA), under its Explainable Artificial Intelligence (XAI) program, to help advance the underlying science of AI. For this multi-million dollar contract, PARC will aim to develop a highly interactive sense-making system called COGLE (COmmon Ground Learning and Explanation), which may explain the learned performance capabilities of autonomous systems to human users."
With the news that the Xerox sale to Fujifilm has been called off, could the PARC component of the arrangement have been a deal breaker?
As for trusting the results of the technology, just ask the machine; it will tell the human user what to believe. Another firm involved with XAI is Charles River Analytics. The stated objective is to overcome current limitations at the human interface: "The Department of Defense (DoD) is investigating the concept that XAI -- especially explainable machine learning -- will be essential if future warfighters are to understand, appropriately trust, and effectively manage an emerging generation of artificially intelligent machine partners."
The Defense Department effort is profiled in "New project wants AI to explain itself":
"Explainable Artificial Intelligence (XAI), which looks to create tools that allow a human on the receiving end of information or a decision from an AI machine to understand the reasoning that produced it. In essence, the machine needs to explain its thinking.
More recent efforts have employed new techniques such as complex algorithms, probabilistic graphical models, deep learning neural networks and other methods that have proved to be more effective but, because their models are based on the machines’ own internal representations, are less explainable.
The Air Force, for example, recently awarded SRA International a contract to focus specifically on the trust issues associated with autonomous systems."
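To make "less explainable" concrete: one widely used post-hoc probe for opaque models is permutation importance, which shuffles one input feature at a time and measures how much test accuracy falls. The sketch below, again assuming scikit-learn with arbitrary dataset and model choices, is illustrative only and has no connection to the SRA International contract.

```python
# A hedged sketch of a post-hoc explanation probe: permutation importance.
# Shuffling a feature destroys its information; the resulting accuracy drop
# indicates how heavily the opaque model leans on it. Assumes scikit-learn.
from sklearn.datasets import load_wine
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = load_wine()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

# An opaque model: accurate, but its weights mean nothing to an operator.
model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(64, 64),
                                    max_iter=2000, random_state=0))
model.fit(X_train, y_train)

# Shuffle each feature in turn and record the mean drop in test accuracy.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=20, random_state=0)
ranked = sorted(zip(data.feature_names, result.importances_mean),
                key=lambda pair: pair[1], reverse=True)
for name, drop in ranked:
    print(f"{name:30s} mean accuracy drop: {drop:.3f}")
```

Note what such a probe does and does not deliver: it ranks inputs by influence, but it does not reveal the model's internal reasoning, which is exactly the gap the XAI program is chartered to close.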
It would be a mistake to equate an AI system with a mere advanced autopilot device that navigates an aircraft. While the stated objective of creating AI that communicates through a human interface sounds reassuring, the actual risk of generating an entirely independent computerized decision structure is being mostly ignored.
Just look at the dangerous use of AI at Facebook, reported in "AI Is Inventing Languages Humans Can't Understand. Should We Stop It?" Can DARPA be confident that it can control a self-generating, thinking Artificial Intelligence entity that may very well see a human component as unnecessary? Imagine a future combat regiment that sees its commanding officer as inferior to the barking of a drill-sergeant computer terminal. In such an environment, where would a General Douglas MacArthur fit in?
XAI rests on an overly optimistic belief that humans can always pull the plug on a rogue machine. Well, such a conviction would first need to be approved by the AI cloud computer.
SARTRE
Source: http://batr.org/utopia/051518.html
"Many seek to become a Syndicated Columnist, while the few strive to be a Vindicated Publisher"
© 2018 Copyright BATR - All Rights Reserved