In blog #1, I provided a high-level view of Tacit Knowledge with examples from everyday experience and discussed some of its properties. In this blog, I will introduce the two principal researchers who have worked on this fascinating subject: Michael Polanyi and Harry Collins.
Michael Polanyi was a chemical engineer turned philosopher who investigated tacit knowledge and contributed to the related cause of freedom in science. Harry Collins is a sociologist who aims to answer some slippery questions about explicit knowledge, tacit knowledge, and the implications for AI. Polanyi’s and Collins’s theories are different, offering independent, incompatible answers to more-or-less the same questions. Collins built a novel conceptual framework for tacit and explicit knowledge. Polanyi, in contrast, draws a simpler distinction: everything that is represented in language is explicit, but language always conveys less than what we know. Therefore, explication in Polanyi’s theory is always incomplete [1,2].
The idea of tacit knowledge is not new, but it was first formally introduced by the Hungarian chemical engineer turned philosopher Michael Polanyi in 1966 in his seminal book The Tacit Dimension, where he argues that tacit knowledge (tradition, inherited practices, implied values, and prejudgments) is crucial for scientific knowledge. Our ability to learn while doing and to find solutions when “we feel our way to success” depends on a pre-conscious process of integrating many parts into new emergent facts and holistic knowledge.
Therefore, while our knowledge can be described with rules, it cannot be reduced to a static rule set, a point Polanyi captured in his famous dictum that “we can know more than we can tell.” He then explains: “We know a person’s face, and can recognize it among a thousand. . . . We recognize the moods of the human face, without being able to tell, except quite vaguely, by what signs we know it. . . . But can it not be argued . . . that the possibility of teaching these appearances by practical exercises proves that we can tell our knowledge of them? The answer is that we can do so only by relying on the pupil’s intelligent co-operation for catching the meaning of the demonstration.” [Polanyi 1969].
“We can know more than we can tell.”
Michael Polanyi – The Tacit Dimension (1966)
Polanyi describes tacit knowledge with the now-famous example of riding a bike. Nearly everyone knows from their own experience how they learned to ride one. The crucial part of learning to ride a bike is the ability to balance and ride independently; amazingly, it seems to have been acquired in the same manner by everyone, at least since bicycling became a standard part of our culture. Kids steer to the right and to the left to counteract gravity, which eventually takes over, and they fall screaming. They do not give up: they start again, learn from their errors, and find a different way to balance. Kids start biking without a clear understanding of how it happens, but it is a magic moment celebrated with the family despite the scratches!
All bike riders also agree that they did not acquire this knowledge by reading books, learning the physics of balance, or studying the gyroscopic effect and other physical subtleties in a master class for bike riders. “Knowledge-how is a concept logically prior to knowledge-that”; nobody needs a Ph.D. in physics to start biking. A robot biker rides in a different way: it is programmed to be more efficient and to make no mistakes when learning to ride a bike. Humans know how to ride a bike; robots know what to do. It is tacit knowledge (procedural) versus explicit knowledge (declarative, propositional). We can say that a robot has its own form of tacit knowledge, different from that of humans, despite being explicitly programmed by humans.
Polanyi explains that knowledge is simultaneously tacit and explicit (Polanyi, 1983). Can we have a more nuanced categorization of tacit knowledge? Yes, according to the investigations carried out by the sociologist Harry Collins in Tacit and Explicit Knowledge (2010), where he argues that the tacit knowledge domain can be represented as a continuum of three slightly overlapping kinds of tacit knowledge:

- Relational Tacit Knowledge (RTK)
- Somatic Tacit Knowledge (STK)
- Collective Tacit Knowledge (CTK)
The three flavors of Collins’s Tacit Knowledge constitute the core of the investigation provided in Tacit and Explicit Knowledge. The book is not an easy read: it sometimes contains controversial theses expressed in vivid language, but it always provides exceptional grounding for discovering what enables us to move around in the world, what our mode of knowing, teaching, and learning is, and how we are different from machines. For Collins, any automatic machine is a social prosthesis: “The way a physical prosthesis such as an artificial heart works can only be understood by watching the way it interacts with the rest of the body. Likewise, the way a social prosthesis works cannot be understood by examining it in isolation but only by looking at the way it fits into the web of activities in which every other human activity is embedded” [Ribeiro & Collins, 2007].
We learn CTK only by participating in a social world and interacting with other people; we do not know how to make this knowledge explicit and transfer it into machines. Whereas RTK is embodied in social relationships between individuals and STK is embodied in the human body, CTK is embodied in culture and society and is peculiar to humans and human organizations.
Animals and machines are unaware of the cultural world in which we live. Therefore, the European Parliament Resolution with recommendations to the Commission on Civil Law Rules on Robotics, which in paragraph 59 f) proposes legal status for robots as electronic persons, is quite surprising, misguided, and dangerous. A group of EU scholars in AI and robotics immediately expressed their deep concern and asked other EU scientists to sign an open letter, considering that “this statement offers many biases based on an overvaluation of the actual capabilities of even the most advanced robots, a superficial understanding of unpredictability and self-learning capacities and, a robot perception distorted by Science-Fiction and a few recent sensational press announcements.”
Collins’s investigations also provide a strong argument against The Myth of Sentient Machines. Machines and pieces of ‘explicit’ knowledge, such as instruction manuals and books, are deceptive: their meaning seems to be carried within them, but it is actually provided by us. Their potential lies in the tacit knowledge and social understanding brought to their use by both their producers and their users, acquired through common enculturation and socialization within similar groups or forms of life [Ribeiro & Collins, 2007].
Therefore, it should be evident why the recent claims that Google’s AI is becoming sentient are unfounded: “AI is a human creation and the words that came from the ‘mouth’ of LaMDA were scoured from human inputs online in a search directed by Lemoine’s questions. The system ‘felt’ nothing.” (2022). AI is only as good as the humans driving it. AI comprises technical and social practices, institutions and infrastructures, politics, and culture; the social dimension of AI is important for ensuring that the technologies serve people rather than replace them or decide for them.
The same criticism can be raised about the artistic capabilities of DALL-E 2, which just glues things together without understanding their relationships: “DALL-E 2’s difficulty with even basic spatial relations (such as in, on, under) suggests that whatever it has learned, it has not yet learned the kinds of representations that allow humans to so flexibly and robustly structure the world”.
The intuition behind the performance of DALL-E 2 or GPT-3 may be linked to the geometric properties of high-dimensional vectors, which differ from those of vectors in the low-dimensional spaces of common human experience. In particular, the ‘magic’ is somehow related to the kissing numbers: the maximum number of non-overlapping unit spheres that can touch a unit sphere in a D-dimensional Euclidean space. The kissing number increases exponentially with dimension: in 2-D it is 6, in 3-D it is 12, but in 1024-D it has an astonishing lower bound of about 10^62. An embedding in the high dimensionalities commonly used in Large Language Models (LLMs) may then designate not a specific meaning in a semantic space but a range of potential meanings, a diffuse, ambiguous meaning that a human user can easily mistake for creativity. Humans can recollect previous experiences together, where scene construction constitutes a common process underlying episodic memory and the imagination of fictitious experiences. Machines, not yet. Moreover, DALL-E 2 and GPT-3 do not have any CTK awareness.
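The exponential growth of the kissing number can be sketched numerically. Below is a minimal Python illustration, assuming a few known exact values (dimensions 1–4, 8, and 24) and the commonly cited asymptotic lower bound of roughly 2^(0.2075·D); the exact constant varies across the literature, so the 1024-D figure it yields is an order-of-magnitude estimate in the same astronomical ballpark as the ~10^62 quoted above, not an exact value:

```python
import math

# Known exact kissing numbers in a few small dimensions
exact = {1: 2, 2: 6, 3: 12, 4: 24, 8: 240, 24: 196560}

def kissing_lower_bound(d: int) -> float:
    """Asymptotic lower bound of roughly 2^(0.2075 * d) on the kissing number in d dimensions."""
    return 2.0 ** (0.2075 * d)

for d, k in sorted(exact.items()):
    print(f"{d:4d}-D: exact kissing number = {k}")

# In 1024-D only bounds are known; the lower bound alone is astronomical.
exponent = 0.2075 * 1024 * math.log10(2)  # log10 of the lower bound
print(f"1024-D: lower bound ~ 10^{exponent:.0f}")
```

The point of the sketch is not the precise exponent but the trend: each added dimension multiplies the number of "directions" available around any embedding vector, which is why a point in a 1024-dimensional semantic space can sit near an enormous cloud of potential neighbors and meanings.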
In the next blog, we will explore the unexpected relationships between Tacit Knowledge and AI.
Personal views and opinions expressed are those of the author.