Gurteen Knowledge Letter
Issue 288 – June 2024
For over 20 years, I have helped numerous individuals initiate Knowledge Cafés within their organizations. If you’re interested in learning how to get started, take a look here. Feel free to email me for further advice; I would be happy to help you.
Contents
- AI and the Art of Remixing
  The postproduction paradigm
- How We Get to Knowledge
  Not through rational methods alone
- Are Large Language Models Sentient?
  Sentience is the capacity to experience sensations, thoughts, and feelings
- How Could You or I Be Wrong About This?
  A crucial question to ask in a conversation
- ChatGPT, Explained
  Is ChatGPT going to take over the world?
- The Real Loss in Losing an Argument
  The opportunity to engage in a constructive, enlightening conversation
- On Conversation by Ben Franklin
  The Pennsylvania Gazette, October 15, 1730
- Humans + AI Learning Community!
  Keeping up with AI developments
- Help Keep My Work Alive
- Unsubscribe
- Gurteen Knowledge Letter
AI and the Art of Remixing
The postproduction paradigm
AI language models are revolutionizing creative expression by mirroring the remixing techniques of postproduction art. This new capability challenges traditional notions of original authorship, allowing creators to curate, remix, and collaborate with existing cultural materials in innovative ways.
I have written about this concept in my blook, and Donald Clark has also discussed it in his blog and LinkedIn, renaming it "postcreation" and writing:
Postcreation: a new world. AI is not the machine, it is now ‘us’ speaking to ‘ourselves’, in fruitful dialogue.
Credit: Donald Clark
This perspective offers an exciting glimpse into the impact of LLMs and Generative AI. These technologies hold immense potential with far-reaching consequences. Over the long term, their influence will be transformative across various fields.
How We Get to Knowledge
Not through rational methods alone
I love this quote by David Weinberger because it accurately describes how we acquire knowledge—not just through rational methods but also via curiosity, social bonds, intuition, and even mistakes. His crucial insight that "knowledge is not determined by information" is spot-on. In complex situations, having data isn't enough; it's deciding which information matters that truly defines knowledge.
We get to knowledge — especially "actionable" knowledge — by having desires and curiosity, through plotting and play, by being wrong more often than right, by talking with others and forming social bonds, by applying methods and then backing away from them, by calculation and serendipity, by rationality and intuition, by institutional processes and social roles.
Most important in this regard, where the decisions are tough and knowledge is hard to come by, knowledge is not determined by information, for it is the knowing process that first decides which information is relevant, and how it is to be used.
Are Large Language Models Sentient?
Sentience is the capacity to experience sensations, thoughts, and feelings
Ross Dawson recently started an interesting thread on LinkedIn on whether Large Language Models are, or will become, sentient. The thread was prompted by an article by Stanford luminary Fei-Fei Li, who argues that AI isn't sentient because it lacks subjective experiences.
Sentience is the capacity to experience sensations, thoughts, and feelings, and to have subjective experiences. It involves being aware of one’s surroundings, having a sense of self, and possessing the ability to perceive and respond to stimuli. Sentient beings are considered to have a level of consciousness, and their experiences are subjective and unique to them.
This provoked me to chat with ChatGPT on my iPhone, and I'm 98% convinced that current large language models (LLMs) are not sentient. At the end of my conversation, I asked ChatGPT to provide three references to articles claiming that LLMs are sentient. None of the articles stated this claim definitively, but the article below systematically asks: Could AI be sentient with Large Language Models?
The bottom line of the article: "... we can't decisively confirm or deny the sentience of current LLMs, and 'finding a conclusion counterintuitive or repugnant is not sufficient reason to reject the conclusion'. Thus, we should at least take the hypothesis seriously and the prospect of AI sentience even more seriously."
Side note: These days, I assign a percentage to my beliefs and challenge others to persuade me to adjust that percentage. There is nothing I am 100% sure about.
How Could You or I Be Wrong About This?
A crucial question to ask in a conversation
Intellectual humility and open-mindedness are vital for constructive dialogue. Polarization and confirmation bias hinder productive conversations on complex issues. Asking How could you or I be wrong about this? promotes self-reflection, critical thinking, and openness to alternative perspectives.
ChatGPT, Explained
Is ChatGPT going to take over the world?
If you feel you do not understand the basics of ChatGPT, this introduction, ChatGPT Explained, may be of help.
The Real Loss in Losing an Argument
The opportunity to engage in a constructive, enlightening conversation
When you lose an argument because your opponent resorts to dishonest argumentative strategies, such as personal attacks and attempts to make the exchange emotional, the actual loss isn't losing the argument itself. The real tragedy lies in the missed opportunity for a meaningful, thought-provoking dialogue that could have enriched both of you.
It's not about winning or losing; it's about the exchange of ideas and perspectives. When an argument devolves into a rhetorical battle rather than a respectful conversation, both sides miss out on the chance to challenge their own beliefs, consider alternative viewpoints, and potentially reach a deeper understanding of the issue.
Ultimately, it's not about one person losing; it's about both individuals losing the opportunity to engage in a constructive, enlightening conversation.
On Conversation by Ben Franklin
The Pennsylvania Gazette, October 15, 1730
In "On Conversation," an essay published in The Pennsylvania Gazette on October 15, 1730, Ben Franklin addressed the issue of people lacking the skills to engage effectively in conversation. Interestingly, Franklin's advice focused more on being pleasant and likable rather than on cultivating deep, thought-provoking discussions.
I have written more about Franklin's essay in my blook.
Humans + AI Learning Community!
Keeping up with AI developments
If you are interested in AI and wish to keep up with developments, you may like to join the Humans + AI Learning Community!, created by Ross Dawson. It is a free online community with over 200 members.
Help Keep My Work Alive
For almost 25 years, I’ve been sharing the Gurteen Knowledge Letter each month, and many of you have been reading it for five years or more. My Knowledge Café also reached a milestone, celebrating its 20th anniversary in September 2022.
If my work has made a difference to you, I’d be grateful if you could consider supporting it. A small monthly donation or any one-off contribution would greatly help cover some of my website hosting costs.
Thank you to the 50+ patrons who already support me – your generosity means a lot.
Unsubscribe
If you no longer wish to receive this newsletter, please reply to this email with "no newsletter" in the subject line.
Gurteen Knowledge Letter
The Gurteen Knowledge Letter is a free monthly email newsletter designed to inspire thinking around Conversational Leadership and Knowledge Management. You can explore the archive of past issues here.
If you're not already subscribed, you can sign up to receive it by email each month.
Feel free to share, copy, or reprint any part of this newsletter with friends, colleagues, or clients, as long as it's not for resale or profit and includes proper attribution. If you have any questions, please get in touch with me.
David Gurteen
Gurteen Knowledge
Fleet, United Kingdom