“Creative writing by OpenAI’s GPT-3 model, demonstrating poetry, dialogue, puns, literary parodies, and storytelling…
[In Dr. Seuss style:]
You have brains in your head.
You have feet in your shoes.
You can steer yourself any direction you choose.
You’re on your way!”
—Gwern Branwen, “GPT-3 Creative Fiction.” Gwern.net. June 19, 2020.
It’s interesting to read GPT-3’s take on different writing styles.
“…the more we automate, and the more sophisticated we make that automation, the more we become dependent on a highly skilled human operator.”
—Adrian Colyer, “Ironies of automation.” the morning paper. January 8, 2020.
A robot surgeon might be a great idea, but it’s going to handle the routine, the easy surgeries. What’s left is what’s hard. That’ll be the new work for human surgeons.
And who fixes the surgeries that the robot got wrong? Who watches the robot surgeons and steps in when they can’t do their job?
This is true of automation in every area. The jobs it eliminates are the easy, routine jobs. With more automation, the level of difficulty simply goes up.
If the robot does the job better, then it gets the job. But someone who does the job better than the robots will always have to evaluate their work and step in when the work is beyond them.
Where will we find such people, if we don’t become them?
“Without communication, connection, and empathy, it becomes easy for actors to take on the ‘gardener’s vision’: to treat those they are acting upon as less human or not human at all and to see the process of interacting with them as one of grooming, of control, of organization. This organization, far from being a laudable form of efficiency, is inseparable from dehumanization.”
—Os Keyes, “The Gardener’s Vision of Data.” Real Life. May 6, 2019.
“Computational thinking assumes that perfect information about the past can and should be collected and synthesized to inform decisions about the future.”
—John Thomason, “Is It Easier to Imagine the End of the World Than the End of the Internet?” The Intercept, November 24, 2018.
A review of James Bridle’s book New Dark Age: Technology and the End of the Future. It is interesting throughout.
Bridle’s central points concern our mental models and the fact that technology is not value-neutral. John Thomason adds that technology isn’t just ideas but tangible capital from which the people investing in it expect a return.
Think about artificial intelligence. Once you introduce a technology that fundamentally changes the landscape, e.g., putting autonomous vehicles on the roads, then the model those vehicles use to make decisions will also have to change, because the vehicles themselves have changed the environment.
Easily said, but some of those changes will inevitably be unknown factors that influence the model without being accounted for in its decision making. One current example is how human biases get baked into training data and influence the model’s decisions. The problem can be very subtle, and there may be no obvious solution, assuming people are aware of the problem at all and that it can be fixed.
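To make the point concrete, here is a minimal, entirely hypothetical sketch of how bias gets baked in. The data, names, and the naive majority-vote "model" are all invented for illustration; the labels encode past human decisions, not true merit, so the model faithfully reproduces the bias:

```python
# Hypothetical toy example: historical bias baked into training data.
# Labels record past hiring decisions (1 = hire, 0 = reject), in which
# equally experienced candidates from neighborhood "B" were rejected
# more often. All data is invented for illustration.
from collections import defaultdict

training_data = [
    ((5, "A"), 1), ((5, "A"), 1), ((5, "A"), 1),
    ((5, "B"), 0), ((5, "B"), 0), ((5, "B"), 1),
    ((2, "A"), 0), ((2, "B"), 0),
]

# A naive model: majority vote per (years_experience, neighborhood) group.
votes = defaultdict(list)
for features, label in training_data:
    votes[features].append(label)

def predict(features):
    labels = votes.get(features, [0])
    return round(sum(labels) / len(labels))

# Two candidates with identical experience get different predictions,
# because the model reproduces the biased history it was trained on.
print(predict((5, "A")))  # 1 (hire)
print(predict((5, "B")))  # 0 (reject)
```

Nothing in the code mentions bias; it simply fits the data. That is exactly why the problem is subtle: the bias lives in the labels, and no inspection of the algorithm alone would reveal it.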
“In September 2017, a screenshot of a simple conversation went viral on the Russian-speaking segment of the internet. It showed the same phrase addressed to two conversational agents: the English-speaking Google Assistant, and the Russian-speaking Alisa, developed by the popular Russian search engine Yandex. The phrase was straightforward: ‘I feel sad.’ The responses to it, however, couldn’t be more different. ‘I wish I had arms so I could give you a hug,’ said Google. ‘No one said life was about having fun,’ replied Alisa…
…’There is no such thing as a neutral accent or a neutral language. What we call neutral is, in fact, dominant’…
…In this way, neither Siri or Alexa, nor Google Assistant or Russian Alisa, are detached higher minds, untainted by human pettiness. Instead, they’re somewhat grotesque but still recognisable embodiments of certain emotional regimes – rules that regulate the ways in which we conceive of and express our feelings.”
—Polina Aronson, “The Quantified Heart.” Aeon. July 12, 2018.
“The success of deep learning systems has given us better machine perception. This is really useful. What it does well is matching or identifying patterns, very fast, for longer than you can reasonably expect people to do. It automates a small part of the glorious wonder of intuition. It also automates everything terrible about it, and adds brilliantly creative mistakes of its own. There is something wonderful about the idea of a machine that gets it completely, hopelessly wrong.”
—Yorksranter, “It was called a perceptron for a reason, damn it.” The Yorkshire Ranter. September 30, 2017.