On the dash to the subway one morning, I grabbed an AMNY and proceeded to toss away the advertising insert, as usual. But the headline stopped me from dropping it in the trash: “What’s Next? The Smart Home of Tomorrow.”
I work at THIRTEEN and the day before, I had attended a meeting about digital transformation, innovation and new technology. We’re in the business of broadcast and video, but our discussion touched on all of the new ways content is being delivered, including voice assistants that pair with smart speakers for the home, like Alexa, Google Assistant and Siri. So instead of reading the AMNY news of the day, I combed through what the future might bring, according to the sponsored content insert (from Newsday, for P.C. Richard & Son). Projections of what artificial intelligence will do in our homes were impressive but also made me just a little uncomfortable. Here’s one bullet point that really struck me:
Clothing, furniture fabrics and bedding, all with built-in sensors, will monitor health and well-being with greater frequency and accuracy.
Is this what will deliver more health care to Americans? What does the clothing have embedded in it, and what temperature do you wash it at? The list of improvements soon to become common included refrigerators that track inventory with cameras, which reminded me of how easily devices in the “Internet of things” can be taken over by those who don’t have our best interests in mind: see this creepy NPR report on a Wi-Fi baby monitor that was hacked.
The insert assured readers that with artificial intelligence (AI), smart products will evolve to become more intuitive, responsive and proactive. Before I slowly acquiesce to AI’s conveniences, I want to know a lot more about it – who is building it, and what are the implications? Are we responsible enough to give up responsibility to machines? What do “the experts” say?
When I got to my desk at THIRTEEN, I searched our station site for artificial intelligence, and found the topic in science episodes, plus interviews with 2020 presidential candidates, business leaders, social scientists and computer experts across THIRTEEN’s news and public affairs programs. Stream episodes below, and stay informed about our newest programs and episodes by signing up for our NewsThirteen weekly newsletter. Artificial intelligence will remain a developing story on public television for years to come.
NOVA: Look Who’s Driving (aired October 23 at 9 p.m., 2019)
As autonomous vehicles are being tested on public roads around the world, the science series NOVA is dedicating an episode to driverless cars on October 23. The series explores innovations and discoveries in science and technology while highlighting the human side of science. Look Who’s Driving examines how self-driving cars work, and has experts weigh in on the daunting challenges ahead, including how to train artificial intelligence to be better than humans at making life-and-death decisions on the road.
The episode is timely, as this October, 400 people in suburban Phoenix, Arizona got exciting news from Waymo, the self-driving car division of Alphabet (the parent company of Google). Early adopters will soon be testing Waymo’s fully driverless Chrysler Pacifica minivans. Waymo is also conducting further 3D mapping of the streets of Los Angeles, one of the most infamous cities for traffic snarls and fender benders.
And just last week, the government of South Korea announced autonomous cars as a national priority. President Moon Jae-in predicted that South Korea will be the first country to put totally autonomous cars on the road by 2027, and carmaker Hyundai is investing $35 billion to achieve the goal.
Facial recognition bias and ethical standards
The Open Mind: Algorithmic Justice (aired January 11, 2019)
As facial recognition is adopted for security cameras and password-enabled accounts, technical errors are critical deal-breakers for users. Joy Buolamwini, the founder of the Algorithmic Justice League at the MIT Media Lab, discovered that products of leading technology companies could not recognize her face, or her gender. Buolamwini is black, and when she used a photo of herself from her popular TED Talk within the systems of IBM, Microsoft, Google and others, some didn’t detect her image at all, and the ones that did detect the image labeled her as male.
Her MIT thesis, “Findings on Gender and Racial Bias in Facial Analysis Technology,” was based on her research into commercial cognitive service technologies, and she presented her findings to the Federal Trade Commission. The ultimate effect of facial recognition bias, if unchecked, can be a cycle of computer-generated discrimination. In a half-hour episode of The Open Mind, Buolamwini shares her personal experience and her years of research into artificial intelligence systems. See The Open Mind episode page for a full transcript.
AI in the Marketplace
Presidential candidate Andrew Yang’s views on AI
Firing Line: Andrew Yang (aired September 29, 2019)
The Democratic 2020 presidential candidate and entrepreneur Andrew Yang joined Firing Line to discuss his growing concerns about robots and artificial intelligence replacing American jobs, and to explain his goal of shifting the country’s economy toward a new human-centered capitalism. He also detailed his signature proposal to give every American adult $1,000 each month. See more Firing Line episodes featuring 2020 candidates on the Firing Line program page.
How Big Data is Transforming Creative Commerce
PBS NewsHour (segment aired October 17, 2019)
Big data is disrupting nearly every aspect of modern life. Artificial intelligence, which involves machines learning, analyzing and acting upon enormous sets of data, is transforming industries and eliminating certain jobs. But that data can also be used to appeal more directly to what customers want. Special correspondent and Washington Post columnist Catherine Rampell reports, with a focus on how AI is changing the fragrance industry, and how even the adult film industry is using data to make creative decisions. PBS NewsHour airs nightly at 7 p.m. and streams the following day.
Steve Adubato, host of Think Tank with Steve Adubato among other local programs, went on location in July 2019 to the VOICE Summit at NJIT (New Jersey Institute of Technology), which drew approximately 5,000 people to Newark, NJ. Adubato discusses voice technology with leaders in the field, including Amazon, which believes that voice technology is the next major disrupter, and one that can include all ages.
Think Tank with Steve Adubato: Using AI and Voice Technology to Improve Healthcare (aired October 10, 2019)
Cris De Luca, Global Director of Digital Innovation at J&J Innovation, talks about the innovation taking place in the healthcare world and how voice technology and artificial intelligence are playing a role in evolving health tech. De Luca believes that AI will augment the human factor of healthcare, not take it out.
Think Tank with Steve Adubato: The Future of Voice Technology (aired September 21, 2019)
A panel of experts in voice technology talk about the future of emerging media and artificial intelligence and technology’s impact on information accessibility, businesses and people’s everyday lives. Guests include: David Isbitski, Chief Evangelist, Amazon Alexa; Pete Erickson, CEO, Modev & Organizer VOICE Summit; Martine van der Lee, Director, Social Media, KLM Royal Dutch Airlines.
The Politics of Data
FRONTLINE: In the Age of AI (premieres Tuesday, November 5 at 9 p.m.)
From fears about work and privacy to a rivalry between the U.S. and China, FRONTLINE explores the promise and perils of AI. The documentary traces a new industrial revolution that will reshape and disrupt our lives, our jobs and our world, and enable the emergence of a surveillance society. See FRONTLINE episodes here.
GZERO WORLD with Ian Bremmer: Big Brother is Watching You (aired June 19, 2019)
Ian Bremmer, a renowned political scientist, entrepreneur and bestselling author, digs into the politics of data, artificial intelligence, automation, and how all three could make American tech companies look a lot like their Chinese counterparts.
How can government use our data? The episode looks to China, where an actual social credit scoring system discourages “anti-social behavior.” The Chinese government calls it “positive coercion.” Bremmer says there is no national strategy in the U.S. for big data and artificial intelligence, but The National Artificial Intelligence Research and Development Strategic Plan, updated in June 2019, is posted on the Whitehouse.gov site, Artificial Intelligence for the American People.
Guest Amy Webb, CEO of the Future Today Institute and New York University professor, begins her discussion of artificial intelligence with simple examples from our lives, such as autocomplete text in emails and texts. She explains that our personal digital data is consolidating primarily with Apple, Amazon and Google, which is why all three are investing in healthcare, yet another source of data that customers can provide. She also discusses how culture influences decisions made in artificial intelligence development.
See more episodes of GZERO WORLD with Ian Bremmer, in which Bremmer shares his perspective on recent global events and interviews the world leaders, experts and newsmakers shaping today’s international politics.