Search Results (8)

Search Parameters:
Keywords = incomputability

20 pages, 7194 KiB  
Article
Will Zero Vulnerability Computing (ZVC) Ever Be Possible? Testing the Hypothesis
by Fazal Raheman, Tejas Bhagat, Brecht Vermeulen and Peter Van Daele
Future Internet 2022, 14(8), 238; https://doi.org/10.3390/fi14080238 - 30 Jul 2022
Cited by 2 | Viewed by 3336
Abstract
Life without computers is unimaginable. However, computers remain vulnerable to cybercrimes, a USD 6 trillion industry that the world has come to accept as a “necessary evil”. The third-party permissions that create an attack surface (AS) and the in-computer storage that computers mandate are key design elements that hackers exploit, first by remote malware installation and later by stealing personal data using authentication-faking techniques. In legacy computers, the AS cannot be completely eliminated, nor can a connected device retain data offline, rendering fool-proof cybersecurity impossible. Although the architects of legacy computers made perfectly reasonable engineering trade-offs for their world, our world is very different. Zero vulnerability computing (ZVC) challenges the impossible with in-computer offline storage (ICOS) and Supra OS (SOS) to deliver comprehensive protection against vulnerabilities. The feasibility of ZVC is demonstrated in a tiny, permanently computer-mounted hardware wallet, providing the first evidence of the complete obliteration of the AS. Malware cannot infect the ZVC device, which lacks an AS, nor can personal data be hacked, as the data remain offline except during sporadic processing. Further research should explore whether ZVC can fully secure computers in more complex real-world scenarios and open a new epoch in the evolution of computers and the Internet.
(This article belongs to the Section Cybersecurity)
Figures:
Figure 1: The exponential growth of connected devices (Statista.com).
Figure 2: Every day, the AV-TEST Institute registers over 450,000 new malicious programs (malware) and potentially unwanted applications (PUA). Of total threats examined, 95.30% were malware and 4.70% were PUAs (source: https://www.av-test.org, accessed on 27 June 2022).
Figure 3: State-of-the-art cybersecurity approaches compared with ZVC’s zero attack surface approach.
Figure 4: (a) In the traditional computing system, the third-party permissions create primary and secondary attack surfaces that bad actors exploit to breach the computer. (b) Zero vulnerability computing (ZVC) completely obliterates the attack surface to achieve a zero attack surface. (c) Conceptual diagrammatic representation of primary and secondary attack surfaces. (d) Diagrammatic representation of SOS obliterating the attack surface and providing a web interface. (e) ICOS with a soft switch and LED indicator. (f) ICOS operation with a hard toggle switch.
Figure 5: Comparison of architectural constructs of 3rd party permissions and attack surface between legacy IoT systems and ZVC.
Figure 6: Tiny ZVC device mounted on a USB port of a host PC.
Figure 7: Illustration of the security of a switchable ZVC hardware wallet that incorporates both the SOS and ICOS modules of ZVC.
Figure 8: The design and dimensions (mm) of the ZVC powered hardware wallet device.
Figure 9: Visual depiction of the test device, the control device, and the reference device mounted on Node0 (target node).
Figure 10: JFed layout of the user interfaces of the test device and the control device mounted on Node0 (target node).
Figure 11: Interconnection between Node0 and Node1.
Figure 12: The private key stored in the .config folder in the control (blue) as well as the ZVC (yellow) device.
Figure 13: Command line interface with malware command executed for stealing the private keys.
Figure 14: Screenshots of the command line interface showing the status of the .config file in the control and test devices.
Figure 15: Diagrammatic illustration of the conventional multi-layered computer architecture in comparison to the projected compact zero attack surface design of ZVC.
17 pages, 29939 KiB  
Review
VLP-Based COVID-19 Vaccines: An Adaptable Technology against the Threat of New Variants
by Wasim A. Prates-Syed, Lorena C. S. Chaves, Karin P. Crema, Larissa Vuitika, Aline Lira, Nelson Côrtes, Victor Kersten, Francisco E. G. Guimarães, Mohammad Sadraeian, Fernando L. Barroso da Silva, Otávio Cabral-Marques, José A. M. Barbuto, Momtchilo Russo, Niels O. S. Câmara and Gustavo Cabral-Miranda
Vaccines 2021, 9(12), 1409; https://doi.org/10.3390/vaccines9121409 - 30 Nov 2021
Cited by 21 | Viewed by 8657
Abstract
Virus-like particles (VLPs) are a versatile, safe, and highly immunogenic vaccine platform. Recently, vaccines in development have targeted SARS-CoV-2, the causative agent of COVID-19. The COVID-19 pandemic has affected humanity worldwide, bringing about incomputable human and financial losses. The race for better, more efficacious vaccines is happening even as the virus increasingly produces variants of concern (VOCs). The VOCs Alpha, Beta, Gamma, and Delta share common mutations, mainly in the spike receptor-binding domain (RBD), demonstrating convergent evolution associated with increased transmissibility and immune evasion. Thus, the identification and understanding of these mutations are crucial for the production of new, optimized vaccines. The use of a very flexible vaccine platform in COVID-19 vaccine development is an important feature that cannot be ignored. Incorporating the spike protein and its variations into VLP vaccines is a desirable strategy, as the morphology and size of VLPs allow for better presentation of several different antigens. Furthermore, VLPs elicit robust humoral and cellular immune responses, are safe, and have been studied not only against SARS-CoV-2 but against other coronaviruses as well. Here, we describe the recent advances and improvements in vaccine development using VLP technology.
Figures:
Figure 1: The adaptive immune response generated by VLP immunization and VLP classification. (A) After immunization, VLPs are phagocytized by dendritic cells or macrophages. Then, they are carried to lymphatic vessels, where the antigenic regions are processed and presented by class II MHC molecules (CD4+ T cells) and, through cross-presentation, by class I (CD8+ T cells). Immunological pathway activation by immunization with VLPs activates robust cellular (cytokine) and humoral (B cell-antibody) immune responses. (B) VLPs are classified as nonenveloped (neVLPs) or enveloped VLPs (eVLPs) based on the absence or presence of a lipidic membrane, respectively. These particles can also be classified as homologous or heterologous VLPs according to their composition. Homologous VLPs are assembled using proteins from the native pathogen only (blue), and heterologous VLPs can be assembled using proteins or peptides from different sources (black and blue).
Figure 2: SARS-CoV-2 structural proteins and the different states of the Spike protein. (A) Schematic representation of the SARS-CoV-2 viral particle. The structure of the SARS-CoV-2 viral particle is composed of four structural proteins: Membrane (M), Envelope (E), Nucleocapsid (N), and Spike (S). The S protein is found in two different states on viral particles: open state (minor population) and closed state (major population). In addition, during the membrane fusion process (host cell entry), the S protein can be found in the fusion state (fusion S). (B) Schematic representation of the binding of open-state S (PDB ID 7498) to the ACE2 receptor present in the host cell. The illustrations were made in free software (CellPaint 2.0 [91] and 3D Protein Imager [92]). The binding figure was made using the crystal structure of ACE2 bound to Spike available at the Protein Data Bank (PDB ID 7A98).
Figure 3: Structure and domain organization of the SARS-CoV-2 Spike (S) protein. (A) The S structure comprises a cytoplasmic domain (CD, white), a transmembrane domain (TM, black), and an ectodomain, which is divided into two subunits, S1 (gray) and S2 (dark gray). The magnification shows the several disulfide bridges (DB, yellow) and the glycosylation sites (GlcNAc, green) through the S protein ectodomain. The S1/S2 interface is highlighted in red. The receptor-binding domain (RBD, cyan) and the receptor-binding motif (RBM, magenta) are also shown in S1. (B) As mentioned in Figure 2, the S protein shows two conformers on viable viruses (closed and open state). The upper panel shows the S protein in the closed state (trimeric and monomeric state). The bottom panel shows the S protein in the open state (trimeric and monomeric state). Illustrations were made in PyMol [110] using the wild-type structures available from Zhang et al. [107,111].
Figure 4: Mapping mutations of SARS-CoV-2 variants of concern (VOCs) and phenotypes. Red: mutations; Cyan: receptor-binding domain (RBD); Magenta: receptor-binding motif (RBM); Light gray: S1; Dark gray: S2; Yellow: Heptad repeat 1; Green cyan: fusion peptide 1; Slate: fusion peptide 2; Green: signal peptide. Illustrations were made in PyMol [110] using resources from Zhang et al. [107,111].
Figure 5: Enveloped and nonenveloped VLPs against SARS-CoV-2.
15 pages, 1424 KiB  
Article
A Note on the Reality of Incomputable Real Numbers and Its Systemic Significance
by Gianfranco Minati
Systems 2021, 9(2), 44; https://doi.org/10.3390/systems9020044 - 12 Jun 2021
Cited by 4 | Viewed by 2872
Abstract
We discuss mathematical and physical arguments contrasting continuous and discrete, limitless discretization as arbitrary granularity. In this regard, we focus on Incomputable (lacking an algorithm that computes in finite time) Real Numbers (IRNs). We consider how, for measurements, the usual approach to dealing with IRNs is to approximate to avoid the need for more detailed, unrealistic surveys. In this regard, we contrast effective computation and emergent computation. Furthermore, we consider the alternative option of taking into account the properties of the decimal part of IRNs, such as the occurrence, distribution, combinations, quasi-periodicities, and other contextual properties, e.g., topological. For instance, in correspondence with chaotic behaviors, quasi-periodic solutions, quasi-systems, uniqueness, and singularities, non-computability represents and corresponds to theoretically incomplete properties of the processes of complexity, such as emergence and quantum-like properties. We elaborate upon cases of equivalences and symmetries, characterizing complexity and infiniteness as corresponding to the usage of multiple non-equivalent models that are constructively and theoretically incomplete due to the non-exhaustive nature of the multiplicity of complexity. Finally, we detail alternative computational approaches, such as hypercomputation, natural computing, quantum computing, and analog and hybrid computing. The reality of IRNs is considered to represent the theoretical incompleteness of complex phenomena taking place through collapse from equivalences and symmetries. A world of precise finite values, even if approximated, is assumed to have dynamics that are zippable in analytical formulae and to be computable and symbolically representable in the way it functions. A world of arbitrary precise infinite values with dynamics that are non-zippable in analytical formulae, non-computable, and, for instance, sub-symbolically representable, is assumed to be almost compatible with the coherence of emergence. The real world is assumed to be a continuous combination of the two—functioning and emergent—where the second dominates and is the norm, and the first is the locus of primarily epistemic extracts. Research on IRNs should focus on properties representing and corresponding to those that are detectable in real, even if extreme, phenomena, such as emergence and quantum phenomena.
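As background for the abstract above, the standard definition from computability theory (not taken from the article itself) pins down what it means for a real number to be computable, and hence what the IRNs discussed here lack:

```latex
% Standard textbook definition, included as background (not from the article).
% A real x is computable iff an algorithm can approximate it to any precision:
\[
  x \in \mathbb{R} \text{ is computable} \iff
  \exists\, f : \mathbb{N} \to \mathbb{Q} \text{ computable such that }
  \forall n \in \mathbb{N}:\ |f(n) - x| \le 2^{-n}.
\]
% An incomputable real number (IRN) is one for which no such f exists; since
% there are only countably many algorithms, almost all reals are incomputable.
```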
Figures:
Figure 1: A generic schema of an ANN.
Figure 2: Cluster 3 is spontaneously established by the movement of cluster 2.
Figure 3: The selection mechanism is the core of DYSAM. The selection is implemented as a strategy, for example, in ensemble learning and evolutionary game theory.
Scheme 1: A schematic representation considered in number theory.
6 pages, 211 KiB  
Review
How Incomputable Is Kolmogorov Complexity?
by Paul M.B. Vitányi
Entropy 2020, 22(4), 408; https://doi.org/10.3390/e22040408 - 3 Apr 2020
Cited by 19 | Viewed by 3857
Abstract
Kolmogorov complexity is the length of the ultimately compressed version of a file (i.e., anything which can be put in a computer). Formally, it is the length of a shortest program from which the file can be reconstructed. We discuss the incomputability of Kolmogorov complexity, which formal loopholes this leaves us with, recent approaches to compute or approximate Kolmogorov complexity, which approaches are problematic, and which approaches are viable.
(This article belongs to the Special Issue Review Papers for Entropy)
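As the abstract notes, Kolmogorov complexity itself is incomputable, but any lossless compressor yields a computable upper bound on it, which is the usual practical workaround. The sketch below illustrates that general idea only (it is not a method taken from the review, and the function name is invented):

```python
import bz2
import random
import zlib

def kolmogorov_upper_bound(data: bytes) -> int:
    """Computable upper bound (in bytes) on the Kolmogorov complexity K(data).

    K itself is incomputable, but a lossless compressor gives an upper bound:
    the compressed file plus a fixed decompressor reconstructs the original,
    so K(data) <= compressed length + O(1). We take the better of two
    off-the-shelf compressors.
    """
    return min(len(zlib.compress(data, 9)), len(bz2.compress(data, 9)))

if __name__ == "__main__":
    regular = b"ab" * 5000                                       # highly regular
    random.seed(0)
    noisy = bytes(random.randrange(256) for _ in range(10_000))  # near-incompressible

    print(kolmogorov_upper_bound(regular))   # tiny compared with 10,000 bytes
    print(kolmogorov_upper_bound(noisy))     # close to 10,000 bytes
```

Better compressors only ever tighten such a bound; no computable procedure can return the exact value, which is the loophole the review examines.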
23 pages, 1170 KiB  
Article
Agent Inaccessibility as a Fundamental Principle in Quantum Mechanics: Objective Unpredictability and Formal Uncomputability
by Jan Walleczek
Entropy 2019, 21(1), 4; https://doi.org/10.3390/e21010004 - 21 Dec 2018
Cited by 4 | Viewed by 7458
Abstract
The inaccessibility to the experimenter agent of the complete quantum state is well known. However, decisive answers are still missing for the following question: What underpins and governs the physics of agent inaccessibility? Specifically, how does nature prevent the agent from accessing, predicting, and controlling individual quantum measurement outcomes? The orthodox interpretation of quantum mechanics employs the metaphysical assumption of indeterminism—‘intrinsic randomness’—as an axiomatic, in-principle limit on agent–quantum access. By contrast, ontological and deterministic interpretations of quantum mechanics typically adopt an operational, in-practice limit on agent access and knowledge—‘effective ignorance’. The present work considers a third option—‘objective ignorance’: an in-principle limit for ontological quantum mechanics based upon self-referential dynamics, including undecidable dynamics and dynamical chaos, employing uncomputability as a formal limit. Given a typical quantum random sequence, no formal proof is available for the truth of quantum indeterminism, whereas a formal proof for the uncomputability of the quantum random sequence—as a fundamental limit on agent access ensuring objective unpredictability—is a plausible option. This forms the basis of the present proposal for an agent-inaccessibility principle in quantum mechanics.
(This article belongs to the Special Issue Emergent Quantum Mechanics – David Bohm Centennial Perspectives)
Figures:
Figure 1: Quantum super-indeterminism [60]. The shortcomings of the orthodox view, which are revealed by the simple concept of super-indeterminism, in the attempt to prove, or justify, the metaphysics behind quantum indeterminacy, are recognized increasingly. The fallacy of circular reasoning is illustrated in Figure 1, which arises from the use of the intrinsic randomness assumption in support of the free choice assumption, which—in turn—rationalizes the presumably “free” selection of measurement settings. Bera et al. [76], for example, have confirmed the fact of ‘super-indeterminism’ by noting that there is indeed present “…an unavoidable circulus vitiosus” in any tests for true randomness, because any available tests for “…the indeterministic character of the physical reality” must presume that “…it is, in fact, indeterministic.” Similar arguments have been put forth by, and prior developments were summarized in, Landsman [77].
Figure 2: Illustration of the irreducible interdependency of basic assumptions that are implicit in standard interpretations of orthodox quantum mechanics (adapted from Walleczek and Grössing [27,83]). (A) Free choice assumption, (B) Intrinsic randomness assumption, and (C) Axiomatic non-signaling assumption. Importantly, the validity of interpreting the non-signaling theorem as a foundational theorem, or axiom, for quantum mechanics, i.e., one which would imply strict indeterminism as the only viable option for interpreting quantum theory, depends on the independent validity of assumptions (A,B). However, neither assumption (A) nor assumption (B) can be confirmed independently if the possibility of ‘free choice’ depends on the existence of a process that is intrinsically random and vice versa (compare Figure 1). Therefore, for example, the observation of EPR-type nonlocal correlations in the laboratory does not represent empirical proof for the indeterministic nature of the locally observed measurement outcomes, if that proof relies on the employment of an axiomatic non-signaling theorem (for more details see Walleczek and Grössing [27]).
Figure 3: Agent inaccessibility as a function of (A) Intrinsic randomness versus (B) Effective ignorance (adapted from Walleczek [60]). Intrinsic randomness represents the orthodox interpretation of quantum mechanics, which is universal indeterminism. There, the presence of the experimenter agent introduces an apparent metaphysical dualism between agent and world (see the main text for additional explanations), which is indicated by the closed line that encloses the presence of the experimenter agent (Figure 3A). By contrast, in universal or global determinism, agents and the physical universe are subject to the same fundamental determinism, whereby, there, the experimenter agent is an integral element of the physical universe, i.e., agent and universe together constitute a lawful, physical continuum (e.g., Szilard [69]), as is indicated by the open line (see Figure 3B). In this picture, the experimenter agent constitutes an entity possessing distinct ‘epistemic’ as well as ‘agentic’ properties (for definitions see Section 4.3). For a detailed explanation of an axiomatic (Figure 3A) versus an effective (Figure 3B) non-signaling constraint—in the context of Bell’s nonlocality theorem—consult Walleczek and Grössing [27]. Briefly, an axiomatic non-signaling constraint (see also Figure 2) is compatible with the violation of measurement outcome independence, which is the standard violation in the context of orthodox quantum theory; by contrast, an effective non-signaling constraint is thought to be compatible with the violation of setting or parameter independence (Shimony [87]), which is the standard violation in the context of an ontological quantum mechanics such as dBB-theory in a universally deterministic universe (Section 3.3).
Figure 4: Agent inaccessibility as a function of (A) Intrinsic randomness versus (B) Objective ignorance (adapted from Walleczek [60]). Intrinsic randomness represents the orthodox interpretation of quantum mechanics, which is universal indeterminism (see legend to Figure 3 for an explanation of the nature of the experimenter agent). Objective ignorance, by contrast, advances the alternative proposal that quantum mechanics in a universally deterministic universe (i.e., global determinism) could account for (objective) quantum unpredictability as defined by an in-principle limit (Figure 4B). Please note that a prior report referred to a related proposal by the term ‘intrinsic complexity’ [60] due to the fact that such an option is available for complex systems dynamics. An objective non-signaling constraint, which is proposed here as an option that may underlie the non-signaling theorem of quantum mechanics, is equally governed by an objective, in-principle constraint; that is, the capacity for operational control by the experimenter agent (for definition see Section 4.3) of, for example, time-symmetric, or nonlocal, ontic influences, or information transfers, is formally and objectively limited by the unavailability to the agent of either (i) infinitely precise knowledge about (time-symmetric) initial conditions, or (ii) infinite computational, or generally technological, resources, or a combination of (i) and (ii). For an overview, see Table 1.
228 KiB  
Article
Metacomputable
by Piotr Bołtuć
Entropy 2017, 19(11), 630; https://doi.org/10.3390/e19110630 - 22 Nov 2017
Cited by 1 | Viewed by 3741
Abstract
The paper introduces the notion of “metacomputable” processes as those which are the product of computable processes. This notion is interesting in the instance when metacomputable processes may not be computable themselves but are produced by computable ones. The notion of computability used here relies on Turing computability. When we talk about something being non-computable, this can be viewed as computation that incorporates Turing’s oracle, maybe a true randomizer (perhaps a quantum one). The notion of “processes” is used broadly, so that it also covers “objects” under a functional description; for the sake of this paper, an object is seen as computable if the processes that fully describe the relevant aspects of its functioning are computable. The paper also introduces a distinction between phenomenal content and the epistemic subject which holds that content. The distinction provides an application of the notion of the metacomputable. In accordance with the functional definition of computable objects sketched out above, it is possible to think of objects, such as brains, as being computable. If we take the functionality of brains relevant for consideration to be their supposed ability to generate first-person consciousness, and if they were computable in this regard, it would mean that brains, as generators of consciousness, could be described, straightforwardly, by Turing-computable mathematical functions. If there were other, maybe artificial, generators of first-person consciousness, then we could hope to design those as Turing-computable machines as well. However, thinking of such generators of consciousness as computable does not preclude the stream of consciousness being non-computable. This is the main point of this article—computable processes, including functionally described machines, may be able to generate incomputable products. Those products, while not computable, are metacomputable—by the regulative definition introduced in this article. Another example of a metacomputable process that is not also computable would be a true randomizer, if we were able to build one. Presumably, it would be built according to a computable design, e.g., by a machine designed using AutoCAD that could be programmed into an industrial robot. Yet, its product—a perfect randomizer—would be incomputable. The last point I need to make belongs to ontology in the theory of computability. The claim that computable objects, or processes, may produce incomputable ones does not commit us to what I call computational monism—the idea that non-computable processes may, strictly speaking, be transformed into computable ones. Metacomputable objects, or processes, may originate from computable systems (systems are understood here as complex, structured objects or processes) that have non-computable admixtures. Such processes are computable as long as those non-computable admixtures are latent, or otherwise irrelevant for a given functionality, and they are non-computable if the admixtures become active and relevant. An ontology in which computational processes, or objects, can produce non-computable processes, or objects, iff the former have non-computable components may be termed computational dualism. Such objects or processes may be computable despite containing non-computable elements, in particular if there is an on/off switch for those non-computable processes and it is off. One kind of such a switch is provided, in biology, by latent genes that become active only in specific environmental situations or at a given age. Both ontologies, computational dualism and computational monism, are compatible with some non-computable processes being metacomputable.
1425 KiB  
Review
Brain. Conscious and Unconscious Mechanisms of Cognition, Emotions, and Language
by Leonid Perlovsky and Roman Ilin
Brain Sci. 2012, 2(4), 790-834; https://doi.org/10.3390/brainsci2040790 - 18 Dec 2012
Cited by 15 | Viewed by 12243
Abstract
Conscious and unconscious brain mechanisms, including cognition, emotions, and language, are considered in this review. The fundamental mechanisms of cognition include interactions between bottom-up and top-down signals. The modeling of these interactions since the 1960s is briefly reviewed, analyzing the ubiquitous difficulty: incomputable combinatorial complexity (CC). Fundamental reasons for CC are related to Gödel’s difficulties of logic, a most fundamental mathematical result of the 20th century. Many scientists still “believed” in logic because, as the review discusses, logic is related to consciousness; non-logical processes in the brain are unconscious. The CC difficulty is overcome in the brain by processes “from vague-unconscious to crisp-conscious” (representations, plans, models, concepts). These processes are modeled by dynamic logic, evolving from vague and unconscious representations toward crisp and conscious thoughts. We discuss experimental proofs and relate dynamic logic to simulators of the perceptual symbol system. “From vague to crisp” explains interactions between cognition and language. Language is mostly conscious, whereas cognition is only rarely so; this clarifies much about the mind that might seem mysterious. All of the above involve emotions of a special kind: aesthetic emotions related to knowledge and to cognitive dissonances. Cognition-language-emotional mechanisms operate throughout the hierarchy of the mind and create all higher mental abilities. The review discusses cognitive functions of the beautiful, the sublime, and music.
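The “from vague to crisp” convergence described above can be loosely pictured as a soft-assignment fit whose associations start deliberately fuzzy and sharpen as the model variance is re-estimated. The sketch below is only that loose analogy, under the assumption that an EM-style iteration is an acceptable stand-in; it is not Perlovsky and Ilin's dynamic-logic equations, and all names are illustrative:

```python
import numpy as np

def vague_to_crisp_fit(x, n_models=2, steps=20):
    """Toy 'vague-to-crisp' fit: EM-style soft clustering whose shared variance
    starts large (vague associations) and is re-estimated each step, so the
    associations gradually sharpen. Illustrative analogy, not dynamic logic."""
    means = np.linspace(x.min(), x.max(), n_models)  # spread initial guesses
    var = 10.0 * np.var(x)                           # start deliberately vague
    for _ in range(steps):
        # soft (fuzzy) association of every sample with every model
        resp = np.exp(-0.5 * (x[:, None] - means[None, :]) ** 2 / var)
        resp /= resp.sum(axis=1, keepdims=True) + 1e-12
        # re-estimate model parameters from the soft associations
        means = (resp * x[:, None]).sum(axis=0) / (resp.sum(axis=0) + 1e-12)
        var = max(float((resp * (x[:, None] - means[None, :]) ** 2).sum()
                        / resp.sum()), 1e-6)         # shrinking variance -> crisper
    return means, var, resp

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = np.concatenate([rng.normal(-3.0, 0.5, 200), rng.normal(3.0, 0.5, 200)])
    means, var, _ = vague_to_crisp_fit(data)
    print("recovered means:", np.sort(means))  # close to [-3, 3]
    print("final shared variance:", var)       # far smaller than the vague start
```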
Figures:
Figure 1: Finding “smile” and “frown” patterns in noise, an example of dynamic logic operation: (a) true “smile” and “frown” patterns are shown without noise; (b) actual image available for recognition (signals are below noise, signal-to-noise ratio is between ½ and ¼, 100 times lower than usually considered necessary); (c) an initial fuzzy blob-model, the vagueness corresponds to uncertainty of knowledge; (d) through (h) show improved models at various steps of dynamic logic (DL) (Equation (A3) is solved in 22 steps). Between stages (d) and (e) the algorithm tried to fit the data with more than one model and decided that it needs three blob-models to “understand” the content of the data. There are several types of models: one uniform model describing noise (it is not shown) and a variable number of blob-models and parabolic models, whose number, location, and curvature are estimated from the data. Until about stage (g) the algorithm “thought” in terms of simple blob models; at (g) and beyond, the algorithm decided that it needs more complex parabolic models to describe the data. Iterations stopped at (h), when similarity (Equation (A1)) stopped increasing.
Figure 2: Generated data; the object index is along the vertical axis and the situation index is along the horizontal axis. The perceptions (data samples) are sorted by situation index (horizontal axis); this makes visible the horizontal lines for repeated objects.
Figure 3: Data, same as Figure 2, randomly sorted by situations (horizontal axis), as available to the DL algorithm for learning.
Figure 4: DL situation learning. Situation-model parameters converge close to true values in three steps.
Figure 5: Errors of DL learning are quickly reduced in 3–4 steps; iterations continue until the average error reaches the predetermined threshold of 0.05 (10 steps).
Figure 6: Correct associations are near 1 (diagonal, except noise) and incorrect associations are near 0 (off-diagonal).
Figure 7: The dual-model architecture, modeling the interaction of language and cognition. Learning of cognition is grounded in experience and guided by language. Learning of language is grounded in the surrounding language at all hierarchical levels.
120 KiB  
Article
Algorithmic Relative Complexity
by Daniele Cerra and Mihai Datcu
Entropy 2011, 13(4), 902-914; https://doi.org/10.3390/e13040902 - 19 Apr 2011
Cited by 20 | Viewed by 9127
Abstract
Information content and compression are tightly related concepts that can be addressed through both classical and algorithmic information theories, on the basis of Shannon entropy and Kolmogorov complexity, respectively. The definition of several entities in Kolmogorov’s framework relies upon ideas from classical information theory, and these two approaches share many common traits. In this work, we expand the relations between these two frameworks by introducing algorithmic cross-complexity and relative complexity, counterparts of the cross-entropy and relative entropy (or Kullback-Leibler divergence) found in Shannon’s framework. We define the cross-complexity of an object x with respect to another object y as the amount of computational resources needed to specify x in terms of y, and the complexity of x related to y as the compression power which is lost when adopting such a description for x, compared to the shortest representation of x. Properties of analogous quantities in classical information theory hold for these new concepts. As these notions are incomputable, a suitable approximation based upon data compression is derived to enable the application to real data, yielding a divergence measure applicable to any pair of strings. Example applications are outlined, involving authorship attribution and satellite image classification, as well as a comparison to similar established techniques.
Figures:
Graphical abstract.
Figure 1: Pseudo-code to generate an approximation C(x ⊕ y) of the cross-complexity K(x ⊕ y) between two strings x and y.
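The paper's key practical move, replacing an incomputable algorithmic quantity with a compression-based approximation, is in the same spirit as the well-established normalized compression distance (NCD) of Cilibrasi and Vitányi. The sketch below uses NCD as a stand-in to show how such a compression-based divergence is computed in practice; it is not the cross-complexity or relative-complexity approximation derived in the paper:

```python
import zlib

def c(data: bytes) -> int:
    """Compressed length in bytes; a computable stand-in for Kolmogorov complexity."""
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance (Cilibrasi & Vitanyi): a practical,
    compression-based divergence between arbitrary strings, related to but
    not identical with the relative-complexity measure of the paper above."""
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

if __name__ == "__main__":
    english = b"the quick brown fox jumps over the lazy dog " * 40
    similar = b"the quick brown fox leaps over the lazy dog " * 40
    unrelated = bytes(range(256)) * 8
    print(ncd(english, similar))    # small: the texts share most structure
    print(ncd(english, unrelated))  # close to 1: little shared structure
```

Like the measures in the paper, this quantity only approximates the underlying algorithmic notion, and how well it does so depends entirely on the compressor used.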