In order for the communication of any system of thought or belief to be successful, a definition and theory of truth are necessary. Truth is a criterion by which we judge a proposition, or a quality by which we determine a proposition to be factual. Questions of truth are not just abstract philosophical discussions; they give us the means to justify our approach to knowledge. There is, of course, a plurality of philosophical approaches to truth, but one particular perspective has dominated the history of philosophy and has only slowly come to be challenged by contemporary philosophers. The theory of truth which has been widely established in our thought is a foundationalist theory of truth, according to which our conclusions are true because they follow from basic beliefs or axioms that can be taken to be true with certainty. This approach, known as foundationalism, implies that there is a basic ground for knowledge that is final, absolute and unchanging. Complex theories and propositions must stand in a relation to this basic ground, and through axiomatic reasoning we can discover true knowledge. This kind of foundationalism is also often compatible with another theory: the correspondence theory of truth. According to this theory, propositions are true insofar as they correspond to the world and describe existing phenomena. In order to evaluate these theories of truth we must look at various foundational approaches and understand the problems that they solve and the functions that they serve. Then we must look for exceptions and gaps in foundationalism, and for the major criticisms that undermine these theories of truth. Lastly, the impact of a change in our theory of truth must be addressed. In this analysis it will be shown that no area of knowledge which relies upon axiomatic reasoning, whether scientific, mathematical or philosophical, can be treated as absolutely true. It will become evident that even in the areas of knowledge that we take to be absolute there is no basic underlying ground of knowledge that is unchangingly true, and that all knowledge is ultimately contingent. It will be argued that part of the reason why foundationalism must fail is that reality itself is ontologically incomplete, so no basic ground of knowledge that is final and certain can be established. Our conclusion will establish a thoroughly non-foundational, non-absolute approach to truth and reality.
When exploring the field of epistemic justification we run into a problem commonly dubbed the Münchhausen trilemma. If someone were to make a proposition and assert that it is true, we would have to ask them how they know that it is true. In response, only three kinds of answer are available.
Firstly, you can employ a form of circular reasoning in which the proposition and the justification validate each other.
Secondly, you can employ an argument of infinite regress, in which each justification itself requires a further proposition to justify it, without end.
Lastly, there can be an axiomatic argument, which claims that we can derive true propositions from certain premises that have to be taken as true.
Foundationalism as a theory of epistemic justification is concerned with the final type of response to the Münchhausen trilemma: it deems infinite regress and circular reasoning unacceptable, and holds that the only valid theory of epistemic justification is one that rests upon previously accepted, certainly true axioms. However, many critics of foundationalism are skeptical of the idea that certainly true grounds exist for us to form axioms out of. Here, we seek to understand this criticism and evaluate axiomatic arguments in order to demonstrate why foundationalism fails as a theory of epistemic justification.
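The structure of the trilemma can be sketched schematically; the following rendering, in which an arrow marks "is justified by", is introduced here purely for illustration and is not a standard notation from the literature.

\begin{align*}
\text{Infinite regress:} \quad & P \leftarrow P_1 \leftarrow P_2 \leftarrow P_3 \leftarrow \cdots \\
\text{Circularity:} \quad & P \leftarrow P_1 \leftarrow \cdots \leftarrow P_n \leftarrow P \\
\text{Foundationalism:} \quad & A_1, A_2, \ldots, A_k \vdash P \quad (\text{the axioms } A_i \text{ accepted without further justification})
\end{align*}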
Aristotle was a foundationalist philosopher who supposed the existence of basic, certainly true beliefs underpinning every apparently true proposition. His theory of epistemic justification is an analysis of the relation between a proposition and its axioms: using logic we can verify whether a proposition necessarily follows from axiomatic truth, and if it does, then we can be certain that our proposition is true. Aristotle's argument for foundationalism is a process of elimination which establishes that infinite regress and circular argumentation are fallacious, so foundationalism must be the only choice. An example of what Aristotle would consider axioms are the three laws of classical logic, which he articulated around 350 BCE: the law of identity, the law of noncontradiction and the law of the excluded middle. The law of identity states that any given X is identical to itself, X = X. The law of noncontradiction states that two contradictory propositions cannot be simultaneously true; that is to say, X is mutually exclusive with not-X. The law of the excluded middle states that each proposition is determinately either true or false; that is to say, there is no proposition X that is neither true nor false. What is notable about these axioms is that they are axiomatic to most systems of thought regardless of discipline, since the assumption that truth abides by the laws of logic is the most fundamental assumption of all. However, Lawrence Krauss, among other quantum physicists, argues that what we know about the operation of our universe in the form of quantum mechanics implies that the classical laws of logic are not absolutely true. Krauss argues that some phenomena in quantum mechanics violate the law of noncontradiction, which demonstrates that the foundational axioms of many of our propositions are not certainly true, in the sense that they do not apply to all phenomena without exception. Thus, if it can be demonstrated that these axioms are not entirely universal, then there is a case to be made against strong foundationalism, which argues that truth stems from a basic ground of certainly true beliefs.
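In modern symbolic notation the three laws can be stated as follows; the formalization is a standard contemporary rendering rather than Aristotle's own.

\begin{align*}
\text{Identity:} \quad & \forall x\,(x = x) \\
\text{Noncontradiction:} \quad & \neg (P \land \neg P) \\
\text{Excluded middle:} \quad & P \lor \neg P
\end{align*}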
The demonstration of the non-universality of the classical laws of logic follows directly from the insights of quantum mechanics. One fundamental takeaway from quantum mechanics is that particles exist in a state of superposition, and the observer effect means that during an observation a particle probabilistically comes to inhabit one of the many potential positions contained in the superposition of all possibilities. The double-slit experiment, in which particles are fired at a barrier with two narrow slits, leads to this conclusion. During this experiment a particle goes through both slits, neither slit and only one slit all at once, and these possibilities exist together in a state of superposition. The movement of the particle is inherently probabilistic, but as soon as we observe it, the different quantum probabilities and superpositions collapse into one single state. The particles leave the source as matter, propagate as a probabilistic wave of potential, interfere with one another to create the interference pattern that discrete particles alone should not create, and finally strike the wall as particles. If we look at the logic behind this empirically observed sequence of events, we can see a clear contradiction with the classical logical laws. A product of quantum mechanics is an understanding of wave-particle duality in which quantum entities behave in two contradictory manners, as particle and as wave, and depending on how they are observed the same matter can be given two separate identities. Two logical impossibilities have occurred: a quantum entity x behaves as wave x and as particle x even though the two are contradictory, and this quantum entity x is identical to both wave x and particle x at the same time. Furthermore, given that we cannot state determinately whether at any given moment a proposition about a quantum entity behaving as a particle or a wave is true or false, the answer being subject to probability alone, the law of the excluded middle is not being followed here either. Thus, scientifically observed, empirical phenomena in the form of quantum mechanics display to us that the three classical laws of logic are non-universal. This is not to say that logical systems are no longer true or that truth derived from logical laws is now false; the point is merely that even something as seemingly fundamental as classical logic cannot be a universal framework or foundation of truth. Some deny this conclusion by questioning the validity of the arguments made here, or by taking issue with how figures such as Lawrence Krauss construct or deliver their argumentation, but the contradiction between the laws of classical logic and quantum mechanics remains clear. It was acknowledged early in the history of quantum mechanics by the Hungarian mathematician and physicist John von Neumann, who began developing a different propositional structure, quantum logic, because he believed the structure of classical logic could not support quantum mechanics.
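A minimal sketch of the formalism behind this description, with the two slit paths written as illustrative basis states rather than as a full treatment of the experiment: before measurement the particle's state is a superposition of both paths, and probabilities attach to the outcomes only when the superposition collapses upon observation.

\begin{align*}
|\psi\rangle &= \alpha\,|\text{slit 1}\rangle + \beta\,|\text{slit 2}\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1 \\
P(\text{slit 1}) &= |\alpha|^2, \qquad P(\text{slit 2}) = |\beta|^2 \quad \text{(only upon measurement)}
\end{align*}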
In areas of knowledge that we consider to be certain, such as mathematics and physics, we find countless examples of axiomatic knowledge being contingent and of our most fundamental axioms being non-universal. For another example of this we can look at Frege's construction of naive set theory and the paradox Russell derived from some of its axioms. In "The Foundations of Arithmetic" the philosopher and mathematician Gottlob Frege defined what a number is using naive set theory. He stated that a number is the extension of a concept: a particular object that is used to assert the numerosity of that concept. So to say there are three apples on the table is to say that 3 is the class of all concepts equinumerous with the concept of the apples on the table. The important thing to note is that this definition leads all such extensions and concepts to be contained in a class of all similar concepts that are equinumerous to other things. Frege's definition of a number using naive set theory was groundbreaking; he had defined a number in a simple and useful manner, successfully creating a foundational truth for arithmetic. However, Bertrand Russell showed that a certain form of reasoning reveals a paradox in Frege's definition. Russell's paradox can be stated simply in the following manner. Given Frege's definition, classes containing elements can be contained within larger super-classes, so we can have such a thing as a "class of all classes that have x". Classes also have a further property: they either contain themselves or they do not. Suppose, then, that there is a class of all classes that do not contain themselves; that is to say, all the classes that are not self-containing are gathered within one class. We can now pose a question that results in a paradox destroying Frege's definition: does this class contain itself? If the class of all classes that do not contain themselves does contain itself, then it cannot contain itself, because it does not fit the definition required for it to be contained. However, if it does not contain itself, then it must contain itself, because not being contained in itself is the criterion for the classes it contains. Either way the situation results in a paradox, and Frege's definition of a number, which had laid the foundation of arithmetic and naive set theory, is contradicted. In the aftermath of Russell's paradox numerous axiomatic set theories have emerged that seek to resolve such paradoxes and are contingent upon their own chosen axioms, but the fact remains that no universal definition or axiom can stand without being contradicted or resulting in paradox.
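Russell's paradox can be compressed into two lines of set-theoretic notation: define R as the class of all classes that do not contain themselves, and the question of whether R contains itself yields the contradiction directly.

\begin{align*}
R &= \{\, x \mid x \notin x \,\} \\
R \in R &\iff R \notin R
\end{align*}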
It is important to note that in axiomatic systems of knowledge like mathematics and physics the non-universality of axioms demonstrated in the cases above is not a cherry-picked account of anomalies but an actual insight about the contingency of knowledge in these fields. This is because any axiomatic system of knowledge is subject to Gödel's incompleteness theorems, formulated by Kurt Gödel in 1931. Gödel's incompleteness theorems were groundbreaking developments in modern logic in the sense that they challenged the foundations of mathematical belief. The theorems concern the ability to prove the truths derived within a mathematical system. A mathematical system in this context can be thought of as a building, where the building blocks are the axioms according to which the building is shaped; we arrange subsequent blocks on top of the axioms in a patterned manner in order for the system to continue. Gödel's first theorem shows that for any consistent axiomatic system capable of expressing elementary arithmetic, there exists some proposition within the system that can be neither proved nor disproved inside it. Returning to our analogy, this is to say that there is some combination of blocks which cannot be shown to be the right pattern in the construction of the building. Gödel's second theorem shows that any such axiomatic system cannot, within its own limits, prove that it is consistent. Taken together, the theorems indicate that statements about the provability or consistency of an axiomatic system in its totality cannot be made from within it. Gödel's theorems had a wide-reaching impact in mathematics; they deepened the foundational crisis because they proved that there could never be a complete theory of mathematics derived from universally true axioms. The implications reach far beyond mathematics: they indicate the problems with using axiomatic reasoning as a theory of epistemic justification. Aside from demonstrating particular non-universalities in axiomatic systems of knowledge, we can use Gödel's theorems to demonstrate the inherent limits of any axiomatic system that is built on a strongly foundational approach to truth.
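Stated compactly, for any consistent, effectively axiomatized system T strong enough to express elementary arithmetic, the two theorems can be paraphrased as follows; this is a standard schematic rendering rather than Gödel's original notation, with Con(T) standing for a formalization of "T is consistent".

\begin{align*}
\text{First theorem:} \quad & \exists\, G_T \ \text{such that} \ T \nvdash G_T \ \text{and} \ T \nvdash \neg G_T \\
\text{Second theorem:} \quad & T \nvdash \mathrm{Con}(T)
\end{align*}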
Even if these external critiques are set aside and we evaluate foundationalism immanently, we find it to be an insufficient theory of epistemic justification. In terms of the Münchhausen trilemma, Aristotle resorted to foundationalism because he believed the other two options, circular reasoning and infinite regress, are unacceptable. The question remains, however: how exactly does foundationalism solve these problems of epistemic justification? Is it not the case that the basic beliefs it accepts are themselves epistemically unjustified? After all, axiomatic reasoning itself is subject to the first two horns of the trilemma, and it may be the case that the axiom itself is established through circular reasoning or infinite regress. The clearest example of this is the Cartesian circle in the philosophy of the foundationalist René Descartes. Descartes lays down an axiom of the existence of a non-deceptive God in order to demonstrate that our perceptions are reliable. However, the argument that Descartes makes for the proof of God's existence must itself suppose that our perceptions are reliable. Each argument contains the other's axiom; together they are simply a disguised form of circular reasoning. Foundationalism and axiomatic argumentation can therefore be seen to be insufficient to provide proper epistemic justification. In response, foundationalists often reason that the axioms of an argument are indubitable, and here they revert to a kind of dogmatism in order to preserve the integrity of the arguments that result from their axioms. The issue with this is that no epistemic justification has been provided; the process has merely been delayed by constructing axioms or basic beliefs that uphold the proposition, and by some peculiar mechanism it is declared that these background assumptions do not require any epistemic justification. Although this is functional as a purely pragmatic perspective, it indicates that there is no foundation for truth or basic ground for belief of the kind strong foundationalists claim.
There is a significant subset of thinkers who believe that these arguments about the basic ground of belief and questions about epistemic justification can be resolved simply through empirical observation. Positivists argue that by observing what is real we can verify our axioms and then form a basic ground of truth from which we can derive further true propositions. The issue with this sort of argumentation is that empirical observations are themselves part of axiomatic systems. This point is most clearly illustrated by the well-known Duhem-Quine thesis. Pierre Duhem and W.V.O. Quine were two prominent philosophers of science who argued that no test of a scientific hypothesis can isolate an empirical result and make an isolated claim of truth. This is because in order to carry out an empirical test you need auxiliary assumptions: background assumptions that previous hypotheses, methods of testing, and accepted explanations and interpretations of results are true. These auxiliary assumptions are never themselves tested in the empirical test, and every test of these assumptions requires its own set of auxiliary assumptions, and so on. Duhem intended the thesis to apply solely to experimental physics, but Quine argued in "Two Dogmas of Empiricism" that this uncertainty of knowledge applies to all theories, mathematical, logical and philosophical included. The thesis indicates that our empirical observations are themselves forms of axiomatic argument, which in turn shows that the naive idea of verifying our beliefs through empiricism is an insufficient theory of epistemic justification.
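The logical core of the thesis is often rendered schematically as follows: a hypothesis H entails an observation O only in conjunction with auxiliary assumptions A_1, ..., A_n, so a failed prediction refutes the conjunction rather than H alone, and logic by itself does not say which conjunct to reject. This is a standard textbook paraphrase, not Duhem's or Quine's own notation.

\begin{align*}
(H \land A_1 \land \cdots \land A_n) \rightarrow O, \qquad \neg O \;\Rightarrow\; \neg H \lor \neg A_1 \lor \cdots \lor \neg A_n
\end{align*}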
Even if we assume that empirical observation is a legitimate means of obtaining some objective access to reality from which we could form a foundation of truth, there are further contradictions for strong foundationalism that arise from our conception of the world. Stated simply, the contention is this: foundationalism must fail because our reality itself is ontologically incomplete, so any set of basic beliefs will always be incomplete and in need of constant revision. This is an argument often advanced by the contemporary philosopher Slavoj Žižek, who makes the point using an analogy. Imagine you are in a video game and are approaching the horizon, or some part of the game that your character is not currently interacting with. The pixels in the background are blurry when you are far from that area and only begin to render as you approach it. The insights of quantum physics tell us that our physical reality behaves in much the same way. In our physical world, matter naturally exists in a state of superposition and potentiality rather than substantive actuality, and it only passes from one state to another through wave function collapse, a process that is probabilistic rather than certain. All of this points to a universe that is not fundamentally complete. If this is the case, then we are forced to admit that there is no complete, definite and foundational ground for empirical reality. Therefore, any subsequent truth and knowledge of reality derived from axiomatic systems must be inherently contingent and probabilistic. In recent years, scholars such as Adrian Johnston, who has written on Žižek's ontology, have sought to incorporate this view of reality into a new form of scientific materialism that is not based on an absolute, foundational ground for truth.
In the final analysis it is evident that the various kinds of foundationalism fail as theories of epistemic justification. Various forms of logical foundationalism were described, from Aristotle to Frege, but it was shown that the axioms that ground these systems are not universal, through instances such as quantum physics not supporting classical logic and Russell's paradox contradicting Frege's definition of a number. Moreover, these are not isolated instances of the non-universality of axioms but a generalized insight expounded through Gödel's incompleteness theorems, which tell us that an axiomatic system cannot be demonstrably and universally true in its totality. In this critique we also argued that foundationalism does not succeed even immanently as a theory of justification, in the sense that it fails to address the trilemma: the axioms themselves are subject to circular reasoning, an example of which in philosophical argument is the Cartesian circle, and the dogmatic idea of indubitable axioms is merely a device for delaying the infinite regress that becomes inevitable once each axiom in a system is itself examined. Despite the positivists' claims, empirical observation does not solve these problems with foundationalism, as the Duhem-Quine thesis shows, since the isolation of an empirical observation is itself contingent on axiomatic argumentation, given that the auxiliary hypotheses which facilitate observation remain untested. Lastly, strong foundationalism is not tenable if reality itself is understood as ontologically incomplete. This substantive list of critiques indicates that strong foundationalism is deeply flawed as a definition of truth and as a theory of epistemic justification. Thus, there is no absolute ground for truth.