Can Computers Think?

Furkan Sami AKYILDIZ
6 min read · Nov 7, 2023


Thinking robot (illustrated by Levente Szabo)

This story will examine the answers that have been given throughout human history to the question of whether machines can have consciousness and think like humans. It will discuss the debate between those who focus on the thought process of the human mind and those who assess the actions taken as a result of thought.

Introduction

The question of how humans think has been a subject of research for centuries. Since we have not been able to derive an algorithm for how humans think, we can only task machines with the actions that result from this process. This is where the question of whether machines can think comes into play. Philosophers who focus on the thought process argue that machines can only mimic thinking, while scientists who judge by measurable results hold that if a machine can produce the output a human would generate, it is an intelligent being.

Debates on “Can Computers Think?”

In order to answer this question, it must first be decided whether the act of thinking should be judged as a process or as an output. The computer scientist Edsger Dijkstra approached the issue wisely: he said that the question of whether machines can think is about as relevant as the question of whether submarines can swim¹. A human swims by means of a particular series of arm and leg movements. If you phrase the question as “Can a submarine swim using the same movements as a human?”, the answer is no. On the other hand, if the question is “Can a submarine move through the water as capably as a human can, or even far beyond that?”, the answer is yes.

Turing transformed this question by removing philosophical conjecture and rendering it in a quantifiable, evidence-based form. In his famous paper, he states that if a machine can provide responses like a human, it can be considered to be thinking like a human². The concept Turing introduced as the Imitation Game, subsequently referred to as the Turing Test, sparked numerous discussions³. Some philosophers argue that a computer displaying intelligent behavior may not engage in genuine thinking but rather simulate the process of thinking. Keith Gunderson argued that something capable of carrying on a conversation with us need not possess the kind of intelligence that we have, emphasizing that thinking is not something that can be fully explained by a single example⁴.
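
To make the structure of the Imitation Game concrete, here is a minimal Python sketch. Everything in it, the respondent functions, the canned answers, the random-guessing interrogator, is hypothetical and purely illustrative; only the three-party setup comes from Turing's paper. The interrogator sees labeled text transcripts and must say which respondent is the machine.

```python
import random

# A minimal sketch of the Imitation Game (hypothetical, illustrative only):
# both respondents return canned text, so the interrogator can only guess.

def human_respondent(question: str) -> str:
    # Stand-in for a person typing an answer at a terminal.
    return f"Let me think about '{question}' for a moment..."

def machine_respondent(question: str) -> str:
    # Stand-in for a program generating an answer.
    return f"Let me think about '{question}' for a moment..."

def imitation_game(questions, interrogator):
    """Run one round: hide the respondents behind labels A/B,
    collect their answers, and ask the interrogator to name the machine."""
    pair = [human_respondent, machine_respondent]
    random.shuffle(pair)                       # hide who is who
    labels = dict(zip("AB", pair))
    transcript = {label: [f(q) for q in questions] for label, f in labels.items()}
    guess = interrogator(transcript)           # interrogator returns "A" or "B"
    truth = "A" if labels["A"] is machine_respondent else "B"
    return guess == truth

# With indistinguishable answers, the interrogator does no better than chance.
rounds = 1000
wins = sum(
    imitation_game(["What is a sonnet?"], lambda t: random.choice("AB"))
    for _ in range(rounds)
)
print(f"machine identified in {wins}/{rounds} rounds (~50% expected)")
```

The point of the setup is exactly Turing's: when the transcripts are indistinguishable, judging whether a respondent “thinks” collapses into judging its outputs.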

Rudolf Arnheim said that not every problem that can be solved by intelligence can be solved only by intelligence, and that intelligence is a quality of a mental process⁵. Since computers have no such process, he claims, we cannot call such machines intelligent. The method computers use, running through many possible responses until one of them happens to succeed, cannot be called intelligent unless mental processes are defined purely by their outputs, or unless our picture of the mechanism of intelligence takes a mechanistic form. With all of these statements, he is essentially implying that machines cannot really think until the anatomy of thought is completely understood, or until the thinking process behind a computer’s output is explained.

John Searle argues that a machine that merely follows a program can never truly think. Humans produce thought as the result of a biological process; computers can at best simulate such biological processes⁶. Searle argues that his Chinese Room thought experiment underscores the fact that computers merely use syntactic rules to manipulate symbol strings but have no understanding of meaning or semantics. Ned Block criticizes the Turing Test from a behavioral point of view: his perspective underscores that intelligence is not just the ability to answer questions in a way that is indistinguishable from an intelligent person⁷.

Natural Intelligence and Machine Intelligence

Artificial Intelligence

In 1955, the term “artificial intelligence” was first introduced by John McCarthy. Artificial intelligence is any system that exhibits behavior that could be interpreted as human intelligence. However, this definition leads to another question: how do we define human intelligence? There is no single standard for human intelligence; it takes many different forms, which makes it very difficult to point to a computer and say that it is intelligent⁸.
Problem solving, the ability to adapt successfully and quickly to a new situation, learning from experience, and generating new ideas and concepts can all be cited as indicators of human intelligence⁹. McCarthy’s definition takes a functional approach to the issue by focusing on outcomes, describing intelligence as the capacity to achieve goals in the world.

The Boundaries of Machine Capabilities

Despite the significant advancement of artificial intelligence, there are still problems at which humans easily excel but AI struggles. For example, Bongard problems remain unsolved by machine pattern-recognition algorithms, primarily because they rely on creative thinking and human intuition. In the image below, the initial and final points of the drawings on the left page are distant from each other, while those on the right page are close together. AI struggles with such problems because of its limited capacity for generalization, leaving it unable to find a solution. Bongard problems still pose a challenge for modern artificial intelligence, and progress has been surprisingly slow¹⁰.

Bongard Problems #62

Humans have an innate ability to learn novel concepts from only a few samples and to generalize these concepts to different situations¹¹. For instance, consider the sequence [1, 4, 9, 61, ?]¹². Human intuition discerns the pattern that the sequence consists of the squares of numbers written in reverse order, whereas current AI algorithms may struggle to predict the next number.
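
Once a human has spotted the rule, checking it is trivial arithmetic. The tiny Python sketch below (the function name is mine, purely for illustration) generates the sequence by squaring each number and reversing its digits, which yields 52 as the missing term. The hard part, of course, is discovering the rule from just four terms, which is exactly where the algorithms struggle.

```python
# Reversed-squares pattern from the example: square n, then reverse its digits.
# 1 -> 1, 2 -> 4, 3 -> 9, 4 -> 16 -> 61, 5 -> 25 -> 52
def reversed_square(n: int) -> int:
    return int(str(n * n)[::-1])

print([reversed_square(n) for n in range(1, 6)])  # [1, 4, 9, 61, 52]
```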

We have always had new goals, new aspirations, problems we wanted to solve. Often we did not know how, and some of these goals were considered so difficult that they did not seem achievable without genuine intelligence on the part of the computer. Our goals in trying to design “thinking machines” are constantly changing in relation to our ever-increasing resources in this area¹³. The capabilities of machines are continually evolving thanks to advances in artificial intelligence and machine learning, so their abilities may expand further in the future.

Conclusion

When evaluating a person’s intelligence, we make our assessment by considering their actions and behaviors rather than performing a brain dissection to monitor neuronal activity. Similarly, when evaluating computers, it is logical to give greater weight to output instead of delving into their internal mechanisms. Therefore, in answer to this question, we can say that machines can think, but not in exactly the same way as humans.

References

[1] E. W. Dijkstra, “The threats to computing science,” Nov. 1984, circulated privately. [Online]. Available: http://www.cs.utexas.edu/users/EWD/ewd08xx/EWD898.PDF

[2] A. M. Turing, “Computing machinery and intelligence,” Mind, vol. LIX, no. 236, pp. 433–460, Oct. 1950. [Online]. Available: https://doi.org/10.1093/mind/LIX.236.433

[3] A. Pinar Saygin, I. Cicekli, and V. Akman, “Turing test: 50 years later,” Minds and Machines, vol. 10, no. 4, pp. 463–518, 2000.

[4] K. Gunderson, “The imitation game,” Mind, vol. 73, no. 290, pp. 234–245, 1964.

[5] R. Arnheim, Visual thinking. Univ of California Press, 1969.

[6] J. Searle, “Can computers think,” Minds, Brains, and Science, pp. 28–41, 1984.

[7] N. Block, “The mind as the software of the brain,” New York, vol. 3, pp. 377–425, 1995.

[8] J. McCarthy, “What is artificial intelligence?” 2004.

[9] R. Colom, S. Karama, R. Jung, and R. Haier, “Human intelligence and brain networks,” Dialogues in Clinical Neuroscience, vol. 12, pp. 489–501, 2010.

[10] S. Depeweg, C. A. Rothkopf, and F. Jäkel, “Solving Bongard problems with a visual language and pragmatic reasoning,” 2018.

[11] C. M. Wu, E. Schulz, M. Speekenbrink, J. D. Nelson, and B. Meder, “Generalization guides human exploration in vast decision spaces,” Nature Human Behaviour, vol. 2, no. 12, pp. 915–924, 2018.

[12] Lecture notes of M. Fatih Amasyali’s AI class at Yildiz Technical University. Available: https://sites.google.com/view/mfatihamasyali/yapay-zeka

[13] M. L. Minsky, “Some methods of artificial intelligence and heuristic programming,” 1959. [Online]. Available: https://api.semanticscholar.org/CorpusID:7600609
