Experts from Google, T-Mobile and other tech frontiers weigh in on the future of AI

11:30pm, 25th April, 2019
SalesPal CEO Ashvin Naik, Google Cloud’s Chanchal Chatterjee, Audioburst’s Rachel Batish and T-Mobile’s Chip Reno discuss the future of artificial intelligence at the Global AI Conference in Seattle. (GeekWire Photo / Alan Boyle)

Artificial intelligence can rev up recommendation engines and make self-driving cars safer. But what else will it be able to do? At today’s session of the Global AI Conference in Seattle, a panel of techies took a look at the state of AI applications — and glimpsed into their crystal balls to speculate about the future of artificial intelligence.

The panelists included Chanchal Chatterjee, AI leader at Google Cloud; Ashvin Naik, CEO of SalesPal, which markets AI-enabled sales analysis tools; Rachel Batish, vice president of product for Audioburst, an audio indexing service; and Chip Reno, senior advanced analytics manager at T-Mobile. The moderator was Shailesh Manjrekar, head of product and solutions marketing at a multi-cloud data storage and management company.

Here are five AI frontiers that came up in today’s conversations, plus a couple of caveats to keep in mind:

Smarter grocery stores: AI-enabled grocery shopping was pioneered right here in Seattle at Amazon Go, but the trend is catching on. Today Walmart opened what it calls the Intelligent Retail Lab in Levittown, N.Y. Britain’s Ocado takes a different tack: Users fill up a virtual shopping cart, then schedule a one-hour delivery slot. Google Cloud helped Ocado develop the underlying technology, including a recommendation engine that figures out customers’ shifting preferences, an algorithm that handles and prioritizes customer service emails, and other improvements over Ocado’s previous system.

Energy-saving server farms: Chatterjee pointed to how Google used its DeepMind machine learning platform to cut energy consumption at its data centers. Before AI was put on the case, 10 years’ worth of efficiency measures had reduced energy usage by merely 12 percent, he said. Within six months, AI brought about a 40 percent reduction. “That was a huge difference that AI made in a very short amount of time that we could not do with 10 years of research,” Chatterjee said.

Financial market prediction: Hedge fund managers and bankers are already using AI to detect market manipulation and assess credit risks. But Chatterjee said the models are getting increasingly sophisticated. AI is being used to predict how margin trades could play out, or whether undervalued financial assets are ripe for the picking. AI models could even anticipate how a stock will move once insiders are free to sell their shares. “When the lock-in period expires … that’s a great time to short,” Chatterjee said.

Deeper, wider AI conversations: Chatterjee predicted that our conversations with voice assistants will get wider, deeper and more personal as AI assistants become smarter. Audioburst’s Batish said conversational AI could provide a wider opening for smaller-scale startups and for women in tech. “Women are very much prominent in conversational applications and businesses,” she said. SalesPal’s Naik agreed with that view — but he worried about the dearth of compelling applications, based on his own company’s experience with voice-enabled devices like Amazon Echo and Google Home. “They’re gathering dust. … We use them just to listen to music or set up alarms. That’s it,” he said.

AI for good, or evil? Chatterjee said AI could be a powerful tool to root out fraud and corruption. AI applications could be built “to see what influence relationships have on outcomes — that tells you if there are any side deals being made,” he said. But Batish worried about the rise of deepfakes and other forms of AI-generated fakery. “I’m actually afraid of what that could bring into our world,” she said. “It would be interesting to see how companies are trying to be able to monitor or identify fake situations that are being built out of very complicated AI.”
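Chatterjee’s idea of mining influence relationships for signs of side deals can be made concrete with a toy example. The sketch below is purely hypothetical: the data, the two-hop rule and every name in it are illustrative assumptions, not anything described on the panel. It simply flags contract awards where the approver is connected to the vendor through a short chain of known relationships.

```python
from collections import deque

# Hypothetical "influence" edges: who has a known relationship with whom
# (e.g. shared board seats, frequent correspondence). Toy data only.
INFLUENCE = {
    "approver_a": {"consultant_x"},
    "consultant_x": {"vendor_corp"},
    "approver_b": set(),
}

def hops_between(graph, start, goal, max_hops=2):
    """Breadth-first search: return the number of hops from start to goal,
    or None if goal is unreachable within max_hops."""
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        node, depth = queue.popleft()
        if node == goal:
            return depth
        if depth == max_hops:
            continue
        for neighbor in graph.get(node, ()):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, depth + 1))
    return None

def flag_possible_side_deals(awards, graph, max_hops=2):
    """Flag contract awards where the approver is linked to the vendor
    through a short chain of influence relationships."""
    flags = []
    for approver, vendor in awards:
        hops = hops_between(graph, approver, vendor, max_hops)
        if hops is not None:
            flags.append((approver, vendor, hops))
    return flags

awards = [("approver_a", "vendor_corp"), ("approver_b", "vendor_corp")]
for approver, vendor, hops in flag_possible_side_deals(awards, INFLUENCE):
    print(f"Review: {approver} awarded {vendor} ({hops} hops of influence)")
```

A production system would work over millions of records and far richer relationship types, but the core question it asks is the same one Chatterjee poses: does the influence graph explain the outcome?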
Watch out for job disruption: Many studies have pointed out that automation is likely to disrupt employment, especially in the service, manufacturing and transportation sectors. “Anything that is repetitive, that can be extracted from multiple sources, that doesn’t have a lot of creativity and innovation, is at risk due to AI,” Chatterjee said. “That means that more people will have to move into other sectors.”

Watch out for the hype: “I’d like to see people get away from the hype a little bit,” T-Mobile’s Reno said. “I’m on the client side, so I see all the pitches involving AI and ML or deep learning. … A lot of times, AI is not applicable to certain use cases where we’re applying it. Just good old-fashioned statistics or business intelligence is fine. So I think that the future of AI relies on getting past the hype and getting more into aligning these awesome tools and algorithms to specific business cases.”
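Reno’s point about “good old-fashioned statistics” lends itself to a quick sanity check that any team can run before commissioning a deep-learning project. The sketch below is a hypothetical illustration: the demand numbers, the weekly-seasonality baseline and the 5 percent accuracy target are all assumptions, not anything T-Mobile uses.

```python
# Minimal sketch: before reaching for deep learning, measure how far a
# naive "same day last week" baseline is from the accuracy the business
# actually needs. All numbers here are made up for illustration.

# Four weeks of hypothetical daily demand (units sold), Monday-first.
demand = [
    120, 135, 128, 140, 180, 220, 160,
    118, 130, 131, 142, 178, 225, 158,
    125, 129, 127, 145, 182, 218, 163,
    119, 137, 126, 141, 179, 221, 161,
]

SEASON = 7  # weekly seasonality

# Naive seasonal forecast: predict the value observed 7 days earlier.
forecasts = demand[:-SEASON]
actuals = demand[SEASON:]

mae = sum(abs(f - a) for f, a in zip(forecasts, actuals)) / len(actuals)
mape = sum(abs(f - a) / a for f, a in zip(forecasts, actuals)) / len(actuals)

print(f"Naive seasonal baseline: MAE = {mae:.1f} units, MAPE = {mape:.1%}")

# Hypothetical business requirement: forecasts within 5% on average.
TARGET_MAPE = 0.05
if mape <= TARGET_MAPE:
    print("Baseline already meets the target; a deep model may be overkill.")
else:
    print("Baseline falls short; more sophisticated models may be justified.")
```

If the dumb baseline already clears the business threshold, the “awesome tools and algorithms” can wait for a problem that needs them.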
Who’ll serve as AI’s watchdog? Experts trade suggestions at AI2 policy workshop

8:40pm, 7th March, 2019
Seattle University’s Tracy Kosa, the University of Maryland’s Ben Shneiderman and Rice University’s Moshe Vardi take questions during an AI policy workshop at the Allen Institute for Artificial Intelligence, moderated by AI2 CEO Oren Etzioni. (GeekWire Photo / Alan Boyle)

Do we need a National Algorithm Safety Board? How about licensing the software developers who work on critical artificial intelligence platforms? Who should take the lead when it comes to regulating AI? Or does AI need regulation at all?

The future of AI and automation, and the policies governing how far those technologies go, took center stage today during a policy workshop presented by Seattle’s Allen Institute for Artificial Intelligence, or AI2. And the experts who spoke agreed on at least one thing: Something needs to be done, policy-wise.

“Technology is driving the future — the question is, who is doing the steering?” said Moshe Vardi, a Rice University professor who focuses on computational engineering and the social impact of automation.

Artificial intelligence is already sparking paradigm shifts in the regulatory sphere: For example, when a Tesla car owner was killed in a 2016 highway collision, the National Transportation Safety Board took a close look at the role of the company’s self-driving software. (And there have been more such cases for the NTSB to investigate since then.)

The NTSB, which is an independent federal agency, may be a useful model for a future federal AI watchdog, said Ben Shneiderman, a computer science professor at the University of Maryland at College Park. Just as the NTSB determines where things go wrong in the nation’s transportation system, independent safety experts operating under a federal mandate could analyze algorithmic failures and recommend remedies.

One of the prerequisites for such a system would be the ability to follow an audit trail. “A flight data recorder for every robot, a flight data recorder for every algorithm,” Shneiderman said.

He acknowledged that a National Algorithm Safety Board may not work exactly like the NTSB. It may take the form of a “SWAT team” that’s savvy about algorithms and joins in investigations conducted by other agencies, in sectors ranging from health care to highway safety to financial markets and consumer protection.

Ben Shneiderman, a computer science professor at the University of Maryland at College Park, says the National Transportation Safety Board could provide a model for regulatory oversight of algorithms that have significant societal impact. (GeekWire Photo / Alan Boyle)
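What might a “flight data recorder for every algorithm” look like in code? Here is a minimal, hypothetical sketch of a tamper-evident decision log: each record carries the hash of the previous record, so investigators can detect after-the-fact edits. The class name, record fields and loan-model example are illustrative assumptions, not a design anyone at the workshop proposed.

```python
import hashlib
import json
import time

class DecisionRecorder:
    """Append-only, hash-chained log of algorithmic decisions.
    Each entry includes the hash of the previous entry, so any
    retroactive tampering breaks the chain and is detectable."""

    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # genesis value

    def record(self, model_id, inputs, output):
        entry = {
            "timestamp": time.time(),
            "model_id": model_id,
            "inputs": inputs,
            "output": output,
            "prev_hash": self._last_hash,
        }
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self._last_hash = entry["hash"]
        self.entries.append(entry)

    def verify(self):
        """Recompute the chain; return False if any entry was altered."""
        prev = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            payload = json.dumps(body, sort_keys=True).encode()
            if hashlib.sha256(payload).hexdigest() != entry["hash"]:
                return False
            prev = entry["hash"]
        return True

recorder = DecisionRecorder()
recorder.record("loan-model-v3", {"income": 52000, "score": 640}, "deny")
recorder.record("loan-model-v3", {"income": 87000, "score": 720}, "approve")
print("Log intact:", recorder.verify())       # True
recorder.entries[0]["output"] = "approve"     # simulate tampering
print("Log intact:", recorder.verify())       # False
```

Hash-chaining is just one simple way to make such a log tamper-evident; a real-world recorder would also need secure storage, standardized schemas and rules about who gets to read it.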
What about the flood of disinformation and fakery that AI could enable? That might conceivably fall under the purview of the Federal Communications Commission — if it weren’t for the fact that a provision in the 1996 Communications Decency Act, known as Section 230, absolves platforms like Facebook (and, say, your internet service provider) from responsibility for the content that’s transmitted. “Maybe we need a way to just change [Section] 230, or maybe we need a fresh interpretation,” Shneiderman said.

Ryan Calo, a law professor at the University of Washington who focuses on AI policy, noted that the Trump administration isn’t likely to go along with increased oversight of the tech industry. But he said state and local governments could play a key role in overseeing potentially controversial uses of AI. Seattle, for example, has passed an ordinance that requires agencies to take a hard look at surveillance technologies before they’re approved for use. Another leader in the field is New York City, which has created a task force to monitor how algorithms are being used.

Determining the lines of responsibility, accountability and liability will be essential. Seattle University law professor Tracy Kosa went so far as to suggest that software developers should be subject to professional licensing, just like doctors and lawyers. “The goal isn’t to change what’s happening with technology, it’s about changing the people who are building it, the same way that the Hippocratic Oath changed the way medicine was practiced,” she said.

The issues laid out today sparked a lot of buzz among the software developers and researchers at the workshop, but Shneiderman bemoaned the fact that such issues haven’t yet gained a lot of traction in D.C. policy circles. That may soon change, however, due to AI’s rapid rise. “It’s time to grow up and say who does what by when,” Shneiderman said.

Odds and ends from the workshop:

Vardi noted that there’s been a lot of talk about ethical practices in AI, but he worried that focusing on ethics was “almost a ruse” on the part of the tech industry. “If we talk about ethics, we don’t have to talk about regulation,” he explained.

Calo worried about references to an “AI race” or use of the term “arms race” by the White House. “This is not only poisonous and factually ridiculous … it leads to bad policy choices,” Calo said. Such rhetoric fails to recognize the international character of the AI research community, he said.

Speaking of words, Shneiderman said the way that AI is described can make a big difference in public acceptance. For example, terms such as “Autopilot” and “self-driving cars” may raise unrealistic expectations, while terms such as “adaptive cruise control” and “active parking assist” make it clear that human drivers are still in charge.

Over the course of the day, the speakers provided a mini-reading list on AI policy issues: “The Age of Surveillance Capitalism” by Shoshana Zuboff; “Weapons of Math Destruction” by Cathy O’Neil; a white paper distributed by IEEE; and an oldie but goodie, “Normal Accidents,” by Charles Perrow.