By Jacob Rueda
Humankind has made efforts to streamline activity and production through the development of technology. From the first stone tools to the internet, humanity has created ways to make life easier for itself and others.
Each tool or device that is created takes away an inconvenience. For example, a bow and arrow made hunting an animal easier than running after it and killing it with bare hands.
When humans moved from hunting and gathering to agriculture, they developed new tools and techniques to adapt gradually to the new way of life.
From agriculture, humans moved to industrial production. Again, technology had to evolve to satisfy the needs of the new era.
Some of those methods also created greater amounts of waste, and humans developed technology to address that too. Each advance in one field created challenges that required further development to solve.
As technology developed, production methods evolved to yield more. Human activity changed in turn, and people were able to move about with greater ease.
Essentially, humans found more ways to create, produce, and solve with better and more advanced tools.
Now comes the age where humans extend beyond using physical tools and into digital and virtual ones. But as humans work to make these tools more convenient and beneficial, ethical concerns arise regarding the effect those tools will have on existence.
ARTIFICIAL INTELLIGENCE IN THE BEGINNING
Artificial intelligence is not a new concept; its framework has been around for some time. In the late 1940s, for example, the neurophysiologist William Grey Walter built a machine nicknamed the “turtle” that used sensors to guide itself.
In the footage, the “turtle” senses that its charge is low and returns to recharge, much as a person who feels hungry goes to get food.
Two decades later, the Johns Hopkins University Applied Physics Laboratory experimented with autonomous machines, building one called the “beast.”
(Johns Hopkins Applied Physics Laboratory)
The “beast” did not rely on a computer to get around. Instead, it used circuitry made up of transistors, along with light sensors and sonar, to detect where it was going.
Although today’s A.I. is far more advanced, it retains the same basic principle: operating autonomously, without human direction. Given that, several fields are experimenting with it to improve methods of production and detection.
ARTIFICIAL INTELLIGENCE IN INDUSTRY
Incorporating A.I. into the medical field is being examined from several angles. At MIT, for example, researchers studied how machine learning combined with physics could be applied to manufacturing medications. The purpose, according to the research, is to “increase efficiency and accuracy, resulting in fewer failed batches of products.”
Currently, a person must isolate the active ingredient in a medication and then dry it. If something goes unnoticed during that process, it can lead to a bad batch, a problem researchers told SciTechDaily is “serious.”
Researchers say that pairing machine learning with what is referred to as a physics-enhanced autocorrelation-based estimator, or PEACE, could make the process of pill manufacturing more efficient.
Law enforcement has gotten into the A.I. game as well, in a way reminiscent of Philip K. Dick’s 1956 story The Minority Report.
With the help of A.I., agencies sift through data looking for patterns in criminality, trying to predict where and when a crime will be committed, and by whom.
(Scenes from Minority Report, 2002, 20th Century Studios)
Despite the advantages of using artificial intelligence in industry, there are drawbacks as well.
DRAWBACKS
Yeshimabeit Milner is the director of Data for Black Lives, an activist organization she helped form in 2017. The goal of the organization is to stop the use of data to marginalize Black people.
In a paper published on the Social Science Research Network, researchers Rashida Richardson, Jason M. Schultz, and Kate Crawford wrote that vendors of predictive policing programs rarely reveal how their systems work or how their data is used.
Those same vendors also exclude from their programs data that documents biased and discriminatory actions by police.
The researchers concluded that there are risks in relying too much on data that is potentially skewed when it comes to addressing public safety.
They also argue that data in predictive policing could be used to keep certain populations marginalized.
“There’s a long history of data being weaponized against Black communities,” Milner told the MIT Tech Review in 2020.
(Scenes from Colossus: The Forbin Project, 1970, Universal Pictures)
Privacy is another concern when it comes to A.I. According to the Brookings Institution, “data from mobile phones and other online devices expand the volume, variety, and velocity of information about every facet of our lives.”
The institution adds that this expansion makes privacy “a global public policy issue” and that A.I. will “accelerate this trend” by magnifying the ability to use personal data in intrusive ways.
Worse yet, there is the idea that A.I. could lead humans to destroy themselves completely.
On May 30, 2023, scientists and other notable figures including Bill Gates and Canadian singer Grimes (real name Claire Elise Boucher) signed a one-sentence open letter saying:
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
Arjun Ramani, a journalist at The Economist, said on the podcast The Intelligence that while A.I. does not pose such a threat at the moment, people worry that it could as it develops. He also cautioned that the future of A.I. development is uncertain.
“We don’t know if these powerful A.I.’s will actually want to do anything bad to humans, or whether they’d be designed in a way in which they could,” he said.
Despite this, Ramani says general-purpose forecasters have predicted a high probability of an A.I.-related catastrophe by the year 2100. But superforecasters, individuals with a proven record of accurate predictions, put the probability considerably lower.
FALLOUT
Regardless of whether people annihilate themselves, A.I. is already affecting industries in drastic ways.
Workers in various industries worry that A.I. could threaten their employment. According to a May 2023 report by the research and outplacement firm Challenger, Gray & Christmas (yes, Christmas), 3,900 of the roughly 80,000 jobs cut in May were attributed to A.I.
Insider reported in June 2023 on the industries that could see job losses because of A.I., including tech, market research, education, and media, to name a few.
Speaking of media, the battle between A.I. and those in entertainment has gotten heated. The Writers Guild of America, an organization representing writers for television shows and movies, went on strike in May.
Among other stipulations, writers wanted to limit the use of the A.I. language model known as ChatGPT. Specifically, they wanted it used as an aid to writers rather than a replacement for their talent.
A few months later, in July, the Screen Actors Guild (SAG-AFTRA) went on strike against the Alliance of Motion Picture and Television Producers (AMPTP). Among the reasons for the strike was the use of A.I. to replace actors with simulated creations bearing their likenesses.
“The entire business model has been changed by streaming, digital, [and] A.I.,” said SAG-AFTRA president and actress Fran Drescher during a press conference on July 13, 2023.
“If we don’t stand tall right now, we are all going to be in trouble. We are all going to be in jeopardy of being replaced by machines,” she said.
Despite the outrage at the possibility of A.I. replacing individuals, there are some who have embraced it in their work.
Shannon Ahern is a 27-year-old high school math teacher in Dublin, Ireland. In an essay for Insider, she wrote that she was initially scared of ChatGPT, but the fear went away after she used it to plan lessons and find resources.
“I was intimidated to try it out,” she said, “but I knew that ChatGPT would only become more popular, and I didn't want to be left behind.” After using it, Ahern said her productivity went “through the roof.”
Even though it has aided her career, Ahern writes that ChatGPT “isn’t perfect” and that she has had to find ways to correct its errors.
As Ramani said, it is uncertain how A.I. will evolve. Depending on its use and development, it could serve as a tool that advances human understanding and ability, or it could prove to be the harbinger of doom some fear.
Regardless, its strengths and weaknesses are now recognized and, again as Ramani said, it is too early to tell what long-term effect it will have on humanity. In the short term, some people have discovered that, used improperly, it can produce disastrous, though not necessarily catastrophic, results.
(From the LegalEagle YouTube Channel by Devin Stone)