Does The Future Of Business And Thought Lie In The Hands Of AI?

With the field of artificial intelligence surging, many wonder if we are moving towards a world where humans will be replaced with AI technology. (Shutterstock)
The artist is dead. The accountant is out of a job. Lawyers are being replaced. Architects don’t need to sketch anymore. AI is here, slowly showing its ability to fill the shoes of any profession. Yet as AI technology gets implemented in more arenas, we’re also finding the places a machine can’t replace the human mind. The technology has surged over the last year, but it is not new by any means.
In the early 1900s, the “heartless” Tin Man of The Wizard of Oz gave one of the first depictions of AI, and in 1950 Isaac Asimov’s collection of nine short stories, ‘I, Robot,’ was published. In the collection, Asimov creates a world of “positronic,” humanlike robots who possess a form of artificial intelligence and wrestle with the moral implications of such technology.
While Asimov created fiction, Alan Turing made predictions. Turing was an English mathematician, computer scientist and logician, best known for his wartime work on machines built to crack coded German messages during WWII. In Turing’s mind, it made sense for machines to be able to use available information, along with reason, to solve problems and make decisions. His 1950 paper, ‘Computing Machinery and Intelligence,’ broke down how to build intelligent machines and how to test them. The term “artificial intelligence” wasn’t coined until 1956, at a Dartmouth College conference that gave birth to the field of AI.
From 1974 to 1990, funding for AI dropped dramatically because the field was not delivering on its promises. IBM brought AI back into relevancy with Deep Blue, a computer built to play chess. Deep Blue became the first computer to win a game against the reigning world chess champion, Garry Kasparov, under regular time controls.
In 2011, the next leap in AI came in the form of a voice on the iPhone 4S. The 4S was a more powerful phone than the 4, but the most notable difference was the virtual assistant Siri. Apple programmed Siri to listen to and decipher people’s questions and requests. It could perform basic software tasks such as setting alarms and using the calculator, and it could scan the internet for answers to users’ questions. The program wasn’t perfect, but it was a step into the future of AI technology.
There are two main types of AI programs: weak and strong. Weak AI programs are built to handle a single job; the chess program and Siri are examples. The chess program is only capable of playing chess, and Siri only listens to questions and provides answers.
Strong AI programs are designed to be more humanlike and to deal with situations that are more complex. An example of strong AI is a self-driving program. These programs don’t just drive; they have to follow traffic laws, be aware of other drivers’ actions, and not mistake a plastic bag floating across the road for a car.
If a weak AI program malfunctions, the consequences are minor. When a strong AI program malfunctions, the outcome can be catastrophic. When Siri misunderstands the user, no one gets hurt. When Tesla’s self-driving AI malfunctions, as it did on November 24, 2022, people get hurt. A driver on the San Francisco Bay Bridge had been using his Tesla Model S’s “full self-driving” option when the car’s left blinker turned on, the car changed lanes and came to a stop in the middle of traffic. The eight-car accident left nine people injured and congested the bridge for over an hour.
AI is being implemented in far more fields than automotive. In the courtroom, AI has been found useful in certain positions but out of its depth in others. Transcription is one of the positions in which AI shines. Court stenographers are crucial to the courtroom: they provide a word-for-word record of the proceedings for posterity.
AI has become capable of transcribing in real time. The technology can also be used in depositions and with other audio evidence to create a fuller, searchable record. The biggest drawback to AI’s implementation is that human stenographers also capture the context and nuance behind the words spoken, something AI still misses.
Language barriers are a significant obstacle in the courtroom. There were over a million interpretations a year over the past four fiscal years, with court proceedings representing the highest volume of interpretations at about 75% of the total. California has the highest need for interpreters; even after years of searching and incentivizing, California’s Court Interpreters Advisory Panel struggles to secure enough interpretation services.
AI can translate in real time for witnesses, defendants and plaintiffs who are non-English speakers, making it a helpful tool. The use of AI may not only lead to more inclusive proceedings but also help ensure that justice is not blocked by a language barrier.
The issue with AI translation and stenography lies in nuance and emotion. Computers need to accurately translate both literal and idiomatic expressions, and the monotone voice of AI may flatten what was an emotionally charged testimony. There is also a conversation about how AI may alter the perceived credibility of a witness. AI has not yet developed an understanding of the complexities of human language.
Along with the justice system, AI is invading the medical field. Because it can process large amounts of data, AI can help diagnose patients more quickly and effectively, and it can streamline drug discovery. Virtual nursing assistants help monitor patients and can schedule appointments.
AI has even found its way into surgery. In 2017, Maastricht University Medical Center in the Netherlands used an AI-driven robot for microsurgery, suturing blood vessels back together in a patient with lymphedema. The vessels measured between 0.3 and 0.8 millimeters in diameter. The robot, created by Microsure, was guided by a human surgeon whose hand movements were scaled down into smaller, more precise robot movements. AI robots can help reduce human error, such as a surgeon’s hand tremor.
Though extremely helpful, AI robots aren’t ready to conduct surgeries without the guidance of a surgeon. AI programs collect and process data, but they can’t truly think yet. They won’t be helpful if something unexpected happens or critical thinking is required, and they don’t understand the intricacies of human emotion, so they can’t provide post-op care the way people need. When people use the free chatbot ChatGPT, it draws on vast amounts of material from the internet, but it doesn’t know how to fact-check. It will give an encyclopedia entry the same weight as a middle schooler’s blog.
AI is a great tool, but it is only that.