
Christof Koch QUOTES

4 " This brings me to an objection to integrated information theory by the quantum physicist Scott Aaronson. His argument has given rise to an instructive online debate that accentuates the counterintuitive nature of some IIT's predictions.
Aaronson estimates Φ^max for networks called expander graphs, characterized by being sparsely yet widely connected. Their integrated information will grow indefinitely as the number of elements in these reticulated lattices increases. This is true even of a regular grid of XOR logic gates. IIT predicts that such a structure will have high Φ^max. This implies that two-dimensional arrays of logic gates, easy enough to build using silicon circuit technology, have intrinsic causal powers and will feel like something. This is baffling and defies commonsense intuition. Aaronson therefore concludes that any theory with such a bizarre conclusion must be wrong.
Tononi counters with a three-pronged argument that doubles down and strengthens the theory's claim. Consider a blank featureless wall. From the extrinsic perspective, it is easily described as empty. Yet the intrinsic point of view of an observer perceiving the wall seethes with an immense number of relations. It has many, many locations and neighbourhood regions surrounding these. These are positioned relative to other points and regions - to the left or right, above or below. Some regions are nearby, while others are far away. There are triangular interactions, and so on. All such relations are immediately present: they do not have to be inferred. Collectively, they constitute an opulent experience, whether it is seen space, heard space, or felt space. All share a similar phenomenology. The extrinsic poverty of empty space hides vast intrinsic wealth. This abundance must be supported by a physical mechanism that determines this phenomenology through its intrinsic causal powers.
Enter the grid, such as a network of a million integrate-or-fire or logic units arrayed on a 1,000 by 1,000 lattice, somewhat comparable to the output of an eye. Each grid element specifies which of its neighbours were likely ON in the immediate past and which ones will be ON in the immediate future. Collectively, that's one million first-order distinctions. But this is just the beginning, as any two nearby elements sharing inputs and outputs can specify a second-order distinction if their joint cause-effect repertoire cannot be reduced to that of the individual elements. In essence, such a second-order distinction links the probability of past and future states of the element's neighbours. By contrast, no second-order distinction is specified by elements without shared inputs and outputs, since their joint cause-effect repertoire is reducible to that of the individual elements. Potentially, there are a million times a million second-order distinctions. Similarly, subsets of three elements, as long as they share inputs and outputs, will specify third-order distinctions linking more of their neighbours together. And on and on.
This quickly balloons to staggering numbers of irreducibly higher-order distinctions. The maximally irreducible cause-effect structure associated with such a grid is not so much representing space (for to whom would space be presented again? that is the meaning of re-presentation) as creating experienced space from an intrinsic perspective. "

Christof Koch, The Feeling of Life Itself: Why Consciousness Is Widespread But Can't Be Computed
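The grid of XOR logic gates in the passage above lends itself to a small illustration. The Python sketch below is not from the book: the 8 by 8 size, the wrap-around boundary, and the rule that each unit becomes the XOR of its four neighbours are assumptions made for brevity. It steps a toy lattice forward in time and counts the element pairs that share an input/output neighbour, the candidates for the second-order distinctions Koch describes; whether those pairs actually specify irreducible distinctions, and what the grid's Φ^max would be, requires the full IIT analysis that the sketch does not attempt.

```python
import itertools
import random

# Toy 2-D grid of XOR units (illustrative assumptions, not Koch's exact model).
# Each unit's next state is the XOR (parity) of its four neighbours' current
# states, with wrap-around boundaries for simplicity.
N = 8  # side length; the passage above uses 1,000

def step(grid):
    """Advance the whole lattice by one time step."""
    nxt = [[0] * N for _ in range(N)]
    for i in range(N):
        for j in range(N):
            up = grid[(i - 1) % N][j]
            down = grid[(i + 1) % N][j]
            left = grid[i][(j - 1) % N]
            right = grid[i][(j + 1) % N]
            nxt[i][j] = up ^ down ^ left ^ right  # XOR of the four neighbours
    return nxt

def neighbours(i, j):
    """Input/output neighbourhood of unit (i, j) on the wrap-around lattice."""
    return {((i - 1) % N, j), ((i + 1) % N, j), (i, (j - 1) % N), (i, (j + 1) % N)}

random.seed(0)
grid = [[random.randint(0, 1) for _ in range(N)] for _ in range(N)]
for _ in range(3):
    grid = step(grid)

# Every unit constrains its neighbourhood: one first-order distinction each.
first_order = N * N

# Pairs of units that share at least one neighbour are the candidates for the
# second-order distinctions mentioned in the quote (irreducibility not checked).
units = [(i, j) for i in range(N) for j in range(N)]
second_order_candidates = sum(
    1 for a, b in itertools.combinations(units, 2)
    if neighbours(*a) & neighbours(*b)
)

print(f"first-order distinctions: {first_order}")
print(f"pairs sharing a neighbour (second-order candidates): {second_order_candidates}")
```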

6 " The cause-effect information is defined as the smaller (minimum) of the cause-information and the effect-information. If either one is zero, the cause-effect information is likewise zero. That is, the mechanism's past must be able to determine its present, which, in turn, must be able to determine its future. The more the past and the future are specified by the present state, the higher the mechanism's cause-effect power.
Note that this usage of 'information' is very different from its customary meaning in engineering and science introduced by Claude Shannon. Shannon information, which is always assessed from the external perspective of an observer, quantifies how accurately signals transmitted over some noisy communication channel, such as a radio link or an optical cable, can be decoded. Data that distinguishes between two possibilities, OFF and ON, carries 1 bit of information. What that information is, though - the result of a critical blood test or the least significant bit in a pixel in the corner of a holiday photo - completely depends on the context. The meaning of Shannon information is in the eye of the beholder, not in the signals themselves. Shannon information is observational and extrinsic.
Information in the sense of integrated information theory reflects a much older Aristotelian usage, derived from the Latin in-formare, 'to give form or shape to.' Integrated information gives rise to the cause-effect structure, a form. Integrated information is causal, intrinsic, and qualitative: it is assessed from the inner perspective of a system, based on how its mechanisms and its present state shape its own past and future. How the system constrains its past and future states determines whether the experience feels like azure blue or the smell of wet dog. "

Christof Koch, The Feeling of Life Itself: Why Consciousness Is Widespread But Can't Be Computed
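The definition that opens the second passage, cause-effect information as the minimum of cause information and effect information, can be made concrete with a toy calculation. The sketch below is an illustration, not code from the book: a deterministic two-node system is assumed, the whole system in its present state is treated as the mechanism, and a plain L1 distance stands in for the earth mover's distance that IIT 3.0 actually uses, so only the comparison between the numbers is meaningful. It also shows the point made in the quote: if the present state constrained neither its past nor its future, the corresponding term, and with it the cause-effect information, would be zero.

```python
from itertools import product

# Toy deterministic two-node system (an illustrative assumption, not a model
# from the book): A' = A XOR B, B' = A AND B.
def update(state):
    a, b = state
    return (a ^ b, a & b)

states = list(product((0, 1), repeat=2))   # all possible past/present/future states
current = (0, 1)                           # present state of the mechanism

# Cause repertoire: probability over past states that could have produced
# `current`, starting from a uniform (unconstrained) perturbation of the past.
preimages = [s for s in states if update(s) == current]
cause_rep = {s: (1 / len(preimages) if s in preimages else 0.0) for s in states}

# Effect repertoire: probability over future states given `current`
# (a point mass here, because the toy system is deterministic).
effect_rep = {s: (1.0 if s == update(current) else 0.0) for s in states}

# Unconstrained repertoires: the past is uniform; the future is what a
# uniformly perturbed present would produce on average.
uncon_past = {s: 1 / len(states) for s in states}
uncon_future = {s: 0.0 for s in states}
for s in states:
    uncon_future[update(s)] += 1 / len(states)

# IIT 3.0 compares repertoires with the earth mover's distance; plain L1
# distance is used below as a simple stand-in.
def l1(p, q):
    return sum(abs(p[s] - q[s]) for s in states)

cause_info = l1(cause_rep, uncon_past)            # how much the present constrains the past
effect_info = l1(effect_rep, uncon_future)        # how much the present constrains the future
cause_effect_info = min(cause_info, effect_info)  # the minimum of the two, as in the quote

print(f"cause information:        {cause_info:.2f}")
print(f"effect information:       {effect_info:.2f}")
print(f"cause-effect information: {cause_effect_info:.2f}")
```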
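For contrast, the Shannon notion of information that the passage sets aside is observer-relative and indifferent to meaning: a signal that merely distinguishes two equally likely possibilities carries one bit, whatever it happens to encode. A minimal sketch of that calculation, assuming nothing beyond the standard entropy formula:

```python
import math

def entropy_bits(probs):
    """Shannon entropy, in bits, of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A signal that distinguishes two equally likely possibilities, OFF and ON,
# carries exactly 1 bit, whether it encodes a blood-test result or a pixel.
print(entropy_bits([0.5, 0.5]))   # 1.0
# A heavily biased binary signal carries less than a bit.
print(entropy_bits([0.9, 0.1]))   # about 0.47
```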