Opinion | Are the Warnings About A.I. Overblown?
More from our inbox:
To the Editor:
Re “A.I. Poses ‘Risk of Extinction,’ Tech Leaders Warn” (front page, May 31):
I read with great interest your article about the potential existential threat of artificial intelligence. Clearly, much anxiety exists concerning the recent, and rapid, developments in so-called large language models, such as that used in ChatGPT. I acknowledge the viewpoints of the very prominent people mentioned in the article.
However, we at the Silicon Valley Laboratory have an unpopular belief that artificial intelligence is fast approaching a limit on how well it can perform higher-order cognitive functions such as creativity.
Specifically, we do not believe that with current approaches, computers can truly be creative in any domain — be it art, music, science, business, mathematics or any field requiring novel thoughts of a human. We believe that A.I. machines have hit an “imitation barrier.”
Applications such as ChatGPT do generate responses to questions, but the results are merely reconfigurations of data and prior knowledge loaded by humans.
Computer programs do exist for creating paintings in the style of past artists. But these artifacts are imitations, not creations. The “aha moment” is missing.
Rowland Chen
San Jose, Calif.
The writer is the C.E.O. of the Silicon Valley Laboratory.
To the Editor:
From the very people who are racing to produce the most powerful artificial intelligence technology imaginable comes this bleak and truly scary warning: Like nuclear weapons, A.I. systems have the potential to threaten humanity with “the risk of extinction.” And much sooner than most people realize.
What exactly are we supposed to do with this information? It’s as though Icarus were telling the world, “Don’t forget to wear plenty of sunscreen.”
Nancy Stark
New York
To the Editor:
Regulation of A.I. by the U.S. or any imaginable consortium of nations will have absolutely zero net positive effect for society. Bad actors — including not only China, Russia and terrorists, but also purveyors of misinformation and disinformation — will not be dissuaded by unenforceable rules.
The real beneficiaries of regulation of A.I. will be the signers of the statement warning of the risks of A.I., who will undoubtedly have a heavy hand in drafting regulations that shield their companies from legal liabilities and competition.
Paul S. Wilson
New Orleans
To the Editor:
As a physician forced to use an electronic medical record constantly, I experience at least 10 major daily disruptions for every one convenience that the technology provides. A.I. will have the same level of unreliability, but with far more danger.
There were compelling reasons to develop nuclear weapons during World War II, and yet many scientists and politicians involved in the project regretted their work. Given that there is less need for A.I. and, by its creators’ admission, at least as much danger as nukes, why continue the work at all? Why not stop before it’s too late?
A.I. will never be reliable, just like every other computer created to date. Why should we believe that any protections we implement will be more successful?
A wise, fictional computer from the classic 1983 movie “WarGames” once said, “The only winning move is not to play.”
Brian Broker
Philadelphia
To the Editor:
Artificial intelligence has a hole one could drive a freight train through. For all its flash and bang and awesome abilities, it is unable to discern the difference between truth and fiction. That, my friends, is the job of human beings, and will, in all probability, be forever so.
A.I. dazzles, but should never be trusted to produce reliable and factually correct results unless the parameters are strictly laid out by people using it before it is set in motion.
John Zeigler
Georgetown, Texas
To the Editor:
The fear that large language models will become so “smart” they may threaten human existence is clearly overblown. Our fear should not be that they will become too smart, but rather that, by doing our writing for us, they will make us less smart.
Stephen Polit
Belmont, Mass.
To the Editor:
The ongoing discussion regarding the potential threat that A.I. poses leaves out one important advantage that humans have over technology.
A person can walk over to the wall and remove the power cord from the outlet.
Eric Schroeder
Bethesda, Md.
When a Public Bathroom Produces Anxiety
To the Editor:
Re “Bathrooms Are Where People Are Most Vulnerable,” by Lydia Polgreen (column, May 21):
Thank you to Ms. Polgreen for her brilliant piece. I am a cisgender woman who is more to the masculine than the feminine side, and every — and I do mean every — visit to a public restroom is a tremendous source of anxiety.
The funny looks, the double takes as women coming into the bathroom check the sign to be sure they’re going into the right door, my own efforts to try to soften my self-presentation — like everyone else, I just need to “ease myself,” to quote a Nigerian toilet euphemism that Ms. Polgreen cites.
Yet nothing about those visits is easy. Add the Republican hysteria to the mix, and a simple visit that so many take for granted becomes even more anxiety-producing.
Unless you have lived this, you probably don’t care. But I appreciate Ms. Polgreen’s giving voice to an experience that I thought was uniquely mine.
Butch women, it’s time to unite for the right to rest in the restroom!
Cynthia Robins
Cochranville, Pa.
To the Editor:
Every time I open the door of a restroom with a unisex sign, ubiquitous in this region, I relish the image of horror and discomfort on the faces of those unsuspecting visitors who may be unwilling to “ease” themselves rather than accept an increasingly common gender-neutral social norm.
Dianne Selditch
Norwalk, Conn.
To the Editor:
I’m a straight woman. Until sports and entertainment venues build more bathrooms for women, I will continue to use the men’s bathroom versus standing in a very long line with crossed legs. Really, people, it’s just a toilet.
April Bennett Stone
Boulder, Colo.
‘Immoral’ Conditions in State Prisons
To the Editor:
“‘Blue Wall’ Inside State Prisons Protects Abusive Guards” (news article, May 28) describes, in graphic detail, the unlawful harms that incarcerated persons suffer in New York State prisons.
Guards in the “cover-up culture” described protect one another from accountability for injuring and sometimes killing inmates. Legal remedies often fail because prisoners know that if they report abuse, they will suffer the guards’ retaliation.
The horrendous punishment of solitary confinement is handed out at the whim of correctional officers, who regularly make up charges against prisoners and then function as judge, jury and executioner, all under the cover of our criminal justice system.
This situation is immoral and untenable. It merits a grand jury investigation.
Mary Clark Moschella
Guilford, Conn.
The writer is a professor of pastoral care at Yale Divinity School.