<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>AI Archives - The Miskatonian</title>
	<atom:link href="http://www.miskatonian.com/tag/ai/feed/" rel="self" type="application/rss+xml" />
	<link>http://www.miskatonian.com/tag/ai/</link>
	<description>Instinct &#38; Intelligence</description>
	<lastBuildDate>Sun, 26 Jan 2025 21:02:15 +0000</lastBuildDate>
	<language>en</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>

<image>
	<url>http://www.miskatonian.com/wp-content/uploads/2024/12/cropped-MiskatonianFav-32x32.png</url>
	<title>AI Archives - The Miskatonian</title>
	<link>http://www.miskatonian.com/tag/ai/</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>The Dreamer and the Machine: human and AI in the mirror of consciousness</title>
		<link>http://www.miskatonian.com/2024/12/11/the-dreamer-and-the-machine-human-and-ai-in-the-mirror-of-consciousness/</link>
					<comments>http://www.miskatonian.com/2024/12/11/the-dreamer-and-the-machine-human-and-ai-in-the-mirror-of-consciousness/#respond</comments>
		
		<dc:creator><![CDATA[Narmin Khalilova]]></dc:creator>
		<pubDate>Wed, 11 Dec 2024 19:31:36 +0000</pubDate>
				<category><![CDATA[All Articles]]></category>
		<category><![CDATA[Essays]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Carl Jung]]></category>
		<category><![CDATA[consciousness]]></category>
		<category><![CDATA[dreams]]></category>
		<category><![CDATA[experience]]></category>
		<category><![CDATA[Heidegger]]></category>
		<category><![CDATA[Henri Bergson]]></category>
		<category><![CDATA[human]]></category>
		<category><![CDATA[Husserl]]></category>
		<category><![CDATA[intuition]]></category>
		<category><![CDATA[machine]]></category>
		<category><![CDATA[will]]></category>
		<guid isPermaLink="false">https://www.miskatonian.com/?p=34867</guid>

					<description><![CDATA[<p>Preface In his speech on the dream of life, Alan Watts describes life as a self-created dream, an exploration of what it means to exist. Imagine, he says, that you could dream any life you wanted. At first, you would fill this dream with endless pleasures and satisfaction, creating a world of your deepest desires....</p>
<p>The post <a href="http://www.miskatonian.com/2024/12/11/the-dreamer-and-the-machine-human-and-ai-in-the-mirror-of-consciousness/">The Dreamer and the Machine: human and AI in the mirror of consciousness</a> appeared first on <a href="http://www.miskatonian.com">The Miskatonian</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p><strong>Preface</strong></p>
<p>In his speech on the <em>dream of life</em>, Alan Watts describes life as a self-created dream, an exploration of what it means to exist. Imagine, he says, that you could dream any life you wanted. At first, you would fill this dream with endless pleasures and satisfaction, creating a world of your deepest desires. But over time, the predictability of such perfection would wear thin. You would begin to wish for challenges, risks, and unknowns—anything to give your dream more meaning, depth, and richness. Eventually, you would dream your way into the life you are living now—a life full of uncertainties, struggles, and joys. This life, with all its imperfections, is not an accident but the result of a deeper will to explore, learn, and uncover who you are through experience. In this way, life itself is both the dreamer and the dream, unfolding through each act of creation and discovery.</p>
<p>This can be seen as an allegory for AI, which is like a dream of data, where endless possibilities are programmed but lack true consequences. In the dream of life, we wish for things without knowing their outcomes. By acting and experiencing, we gather knowledge and learn something about ourselves. This is like data collection in real life—we gain information from living and acting and call it <em>knowledge about things</em>. For this reason, we might say there are no fixed rules in the game of life. AI, on the other hand, interacts with Big Data—a representation of life rather than a lived experience.</p>
<p>Once, I listened to Alan Watts’ speech on the dream of life and felt its profound metaphorical connection to life itself. While immersed in my exploration of AI, his words resurfaced in my mind, provoked by the resemblance I noticed in the AI-generated images I created. At first, these images appeared blurry and dreamlike, which stirred my curiosity and drew me back to the ideas in his speech. In this paper, I aim to compare the functionalities of humans and AI to gain a deeper understanding of their distinctions and intersections. I will explore the following question:</p>
<p>&#8220;How does comparing human and AI decision-making deepen our understanding of human consciousness?&#8221;</p>
<h3><strong>The Holistic Human Experience</strong></h3>
<p>In my previous papers on philosophical anthropology, I describe human experience as a system shaped by three interwoven dimensions: <em>the physical, mental, and emotional bodies.</em> Drawing from Jungian psychology, I see these bodies as working together, constantly influencing and transforming each other. This integration creates our unique sense of self and connects us to the world. <strong>The physical body</strong> (1)  roots us in sensory reality, allowing direct interaction with the external world and grounding our presence. <strong>The mental body</strong> (2) is the realm of thought, reason, and consciousness, where we construct meaning and engage with abstract ideas, framing our experiences in coherent narratives. Finally, <strong>the emotional body</strong> (3) encompasses the subtleties of feeling, intuition, and relational awareness, infusing our experiences with depth and fostering empathy.</p>
<p>In essence: 1) My physical body feels the need to drink, signaling 2) my mental body to think of water. The mental body, drawing on past experiences of thirst (retrieving information from memory), triggers 3) an emotional response—discomfort or even despair—which ultimately leads back to 1) a physical action of pouring water.</p>
<p>These bodies don’t act in a fixed order but as a connected whole. In moments of extreme affect, the mental body might even be bypassed, leaving only survival instincts. I believe we often rely too much on the mental body and overlook the physical body’s deeper, instinctual wisdom, but that is another topic.</p>
<p>Let me summarize and verify my conclusions through the lens of cognitive psychology and neuroscience to ensure clarity and avoid potential misconceptions. The act of pouring water begins with the body’s innate signals. When dehydrated, receptors in the hypothalamus detect an imbalance and trigger thirst. This physical alert signals the brain, initiating a cascade of processes to resolve the need. The brain’s sensory systems focus attention on the discomfort, while the mental body identifies the problem and searches for a solution, drawing on past experiences stored in the hippocampus. The prefrontal cortex evaluates options, while the amygdala adds emotional urgency to prioritize action. Emotion amplifies motivation, engaging the limbic system and basal ganglia to drive the physical act. Signals from the prefrontal cortex guide the motor cortex and cerebellum to execute precise movements, translating intention into action. As water is consumed, the hypothalamus detects restored hydration, and the brain’s reward pathways release dopamine, reinforcing the behavior. This sequence illustrates the seamless interplay of the physical, mental, and emotional bodies, working together as a holistic system where signals, memory, and intention converge to resolve needs and shape future actions.</p>
<p><strong>Will, Energy, Intention, and Intuition </strong></p>
<p>Human consciousness does not end at this simple need-satisfaction cycle. Human decision-making is similar to observing the behavior of photons—it depends on intent and context. Photons exist as both waves and particles, with their behavior depending entirely on the moment the observer chooses to look. Whereas AI must be given an impulse with precise instructions to generate, say, a stunning image of a beautiful landscape, we (phenomenologically, and within the framework of continental philosophy, as Kant suggests) are intuiting the very essence of our reality in each moment. A decision’s value as a “wave” or “particle” only emerges after the will and intention have been directed into precise action.</p>
<p>I see here three necessary components:</p>
<ol>
<li><strong>Will</strong> is the driving force, the raw power that connects a person to the universe.</li>
<li><strong>Intention</strong> shapes and directs that force, turning it into purposeful action.</li>
<li><strong>Energy</strong> is the resource that sustains both will and intention, enabling one to perceive and act with clarity and power.</li>
</ol>
<p>For example, deciding to start a new project involves the will to overcome hesitation, the energy to plan and execute, and the intention to focus these efforts on achieving a specific goal. Together, they form the foundation of purposeful and effective decision-making.</p>
<p>Husserl&#8217;s intentionality inherently connects to will and intention, as it reflects the directed nature of human consciousness. Will emerges as the force driving our focus, while intention shapes and directs this force toward purposeful action. Together, they align the mind’s &#8220;aboutness&#8221; with meaningful outcomes, grounding decisions in context, values, and purpose. Unlike AI, which operates without true intention or will, human actions arise from a conscious engagement with the world, where decisions are not just responses but deliberate expressions of inner purpose.</p>
<p><strong>Intuition</strong> occupies a special and distinct place in the philosophical exploration of human cognition and existence. It represents a profound ability to grasp knowledge or make decisions instinctively, bypassing conscious reasoning. Rooted in the subconscious, intuition draws from accumulated experiences, emotions, and patterns, offering immediate insights that often feel inherently &#8220;true.&#8221; Philosophically, it embodies a bridge between the rational and the ineffable, allowing access to layers of understanding beyond logic or analysis.</p>
<p>Intuition holds particular importance in creativity, ethical decision-making, and moments of existential clarity. By engaging the emotional and subconscious dimensions of the human mind, it provides a unique way of navigating the world, complementing logical thought while transcending its limitations. This makes intuition not just a cognitive tool but a vital philosophical concept, illuminating the deeper, interconnected aspects of human experience.</p>
<p>Here, I propose the addition of a fourth body: <em>the transcendent body</em>. It is the first-person space where intentions form before they manifest in physical, emotional, or mental acts. This transcendence is where human decision-making begins—beyond the realms of data and computation. The transcendent body can indeed be likened to the existential or spiritual aspects of human life, as it resides in the abstract realm where meaning, purpose, and being converge. It is the space of pre-decision, where intentions form before they manifest in thought or action. This dimension connects us to something beyond the physical, psychological or emotional, grounding our existence in a deeper reality that cannot be fully measured or explained. It is here that we encounter the ineffable—the essence of what it means to be human, to seek, to question, and to create. AI, by contrast, operates without such a transcendent dimension. It processes data and produces responses, but its decisions lack the deeper existential grounding of human will and intention. In Alan Watts’ words, the intuitive, unpredictable dream of human life against the predetermined &#8220;dream&#8221; of AI.</p>
<p><strong>How does AI work?</strong></p>
<p>AI’s decision-making follows a structured sequence rooted in data processing and algorithmic evaluation. It begins with <em>input collection</em>, where raw information is gathered through sensors, cameras, or connected systems. These inputs form the foundation of the process, ranging from visual data to environmental information or text streams.</p>
<p>AI processes data in a structured sequence. First, the data is cleaned and transformed into a usable format through <em>preprocessing and feature extraction</em>, identifying key patterns for analysis. The system then applies its <em>trained model</em> to evaluate these features, generating predictions or classifications that guide <strong><em>decision-making</em></strong>. Once a decision is made, it is <em>executed</em> either physically, via robotics, or virtually, through software systems. The process concludes with <em>feedback and adjustment</em>, as the system evaluates outcomes and refines its future performance in a continuous loop. This method ensures precision and efficiency, with data flowing through clear stages of <em>analysis, action, and improvement.</em></p>
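<p>As a rough illustration, the sequence just described (input collection, preprocessing and feature extraction, trained model, decision, execution, feedback) can be sketched as a toy loop in Python. This is a hypothetical sketch, not a real system: the “model” is a simple threshold stand-in for a trained model, and every name in it is invented for illustration.</p>

```python
# Toy sketch of the pipeline: input -> preprocessing / feature
# extraction -> model -> decision -> execution -> feedback.
# All names are illustrative; a real system would use a trained model.

def preprocess(raw):
    # Preprocessing: keep only usable numeric readings.
    return [x for x in raw if isinstance(x, (int, float))]

def extract_features(data):
    # Feature extraction: reduce the cleaned data to one feature (the mean).
    return sum(data) / len(data)

class ThresholdModel:
    """Stand-in for a trained model: classifies a feature value."""
    def __init__(self, threshold):
        self.threshold = threshold

    def predict(self, feature):
        # Decision-making: map the feature to an action.
        return "act" if feature > self.threshold else "wait"

    def adjust(self, error):
        # Feedback and adjustment: nudge the threshold for next time.
        self.threshold += 0.1 * error

def run_cycle(model, raw_input, target):
    data = preprocess(raw_input)        # cleaned input
    feature = extract_features(data)    # key pattern for analysis
    decision = model.predict(feature)   # prediction guides the decision
    model.adjust(feature - target)      # continuous feedback loop
    return decision                     # "execution" is just a return here

model = ThresholdModel(threshold=0.5)
print(run_cycle(model, [0.2, 0.9, "noise", 0.7], target=0.6))  # -> act
```

<p>The point of the sketch is purely structural: each stage is a separate, mechanical transformation of data, with no awareness or intention anywhere in the loop.</p>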
<p><strong>Comparison</strong></p>
<p>The comparison between human and AI functionalities highlights both similarities and key differences in how they process inputs, make decisions, and execute actions. While humans and AI gather inputs from their environments, humans translate needs into conscious awareness through sensory systems, whereas AI processes raw data without awareness or subjective understanding. AI relies entirely on <em>human input</em>. It processes human impulses but cannot generate its own. Can AI ever develop an impulse through data alone? This circles back to the question of free will. How do humans decide on an impulse, and how does AI respond?</p>
<p>In phenomenology, Husserl&#8217;s concept of intentionality refers to the mind&#8217;s inherent &#8220;aboutness,&#8221; where consciousness is always directed toward something—a thought, object, or experience. This intentionality ties human decision-making to a deeper context, as actions are imbued with meaning, purpose, and an understanding of their relational significance. Humans not only act but comprehend the &#8220;why&#8221; and &#8220;for whom&#8221; of their choices, informed by lived experience and existential reflection. In contrast, AI lacks intentionality in this phenomenological sense; its actions are not &#8220;about&#8221; anything but are merely responses to data processed within predefined parameters. AI&#8217;s decisions, while functional, are devoid of the relational and existential depth that characterizes human intentionality and meaningful action.</p>
<p>Humans translate needs into conscious awareness, relying on a dynamic interplay of physical, mental, and emotional dimensions, where will, power, and intention create purposeful actions driven by intuition, emotions, and context. AI, however, processes raw data without awareness, relying on programmed objectives and optimization criteria. Human feedback fosters learning and growth, while AI’s feedback loops refine performance without experiential depth. Humans act as holistic systems, integrating creativity and ethics, while AI operates in linear, algorithmic stages, devoid of subjective experience.</p>
<p>Deterministic AI systems face profound ethical challenges in decision-making, particularly in reinforcing biases and operating without transparency. The IEEE&#8217;s Ethically Aligned Design points out that AI trained on biased data risks replicating systemic inequalities, especially in sensitive areas like hiring or law enforcement. Generative AI introduces further dilemmas, such as the creation of deepfakes and misinformation, threatening public trust and safety. These limitations emphasize the necessity of ethical oversight and human accountability to ensure AI serves society responsibly.</p>
<h3><strong>Personal reflection</strong></h3>
<p>It is intriguing that decision-making, the very capacity I set out to examine, emerges as a shared point of comparison between humans and AI in this exploration. However, I believe the deterministic nature of AI should not be fully trusted from an ethical standpoint.</p>
<p>When we turn to AI, we must recognize that true human likeness requires more than programming or computation; it demands a holistic, transcendental body—something beyond our <em>mental, physical or emotional</em> control or understanding.</p>
<p>It is also curious how discussions about AI inevitably lead us to questions about ourselves. I wonder if this stems from an over-reliance on our intelligence, viewing it as superior, only to now feel unsettled as we measure ourselves against a tool designed to enhance it. Perhaps it is time to recognize that we are far more than our functionality, societal roles, or intelligence. AI, in a way, mirrors our current state—where we prioritize functioning over simply being. This is a stage we have become so accustomed to that unlearning it seems nearly impossible. I cannot say if this reflection will serve as a reminder for everyone, but I do know that whatever unsettles or frustrates me ultimately reveals something about myself. This reflection forces us to question what it means to truly know, decide, and create—and to acknowledge the enduring mysteries of human consciousness.</p>
<p><strong>Conclusion</strong></p>
<p>In conclusion, the exploration of human and AI functionalities reveals not just differences in processing, decision-making, and action, but also deeper insights into what defines us as humans. While AI operates with precision, relying on deterministic algorithms and external programming, it lacks the holistic integration of will, intention, and transcendence that shapes human experience.</p>
<p>AI mirrors our current state of prioritizing functionality over being, challenging us to confront our reliance on intelligence and efficiency. Yet, this comparison serves as a reminder that we are far more than our cognitive capacities or societal roles—we are beings of creativity, intuition, and interconnected dimensions. Reflecting on AI compels us to reexamine our own nature, urging us to balance our abilities to function with the deeper essence of simply existing.</p>
<p>As we create ever more sophisticated AI, we must also look inward, recognizing that what sets us apart is not just our ability to think or act, but our capacity to exist holistically—to dream, create, intuit, and transcend.</p>
<p>&nbsp;</p>
<p>1. Heidegger, Martin. Being and Time. Translated by John Macquarrie and Edward Robinson. New York: Harper &amp; Row, 1962.</p>
<p>2. Jung, Carl Gustav. The Archetypes and the Collective Unconscious. Translated by R. F. C. Hull. Princeton: Princeton University Press, 1981.</p>
<p>3. Carter, Rita. 2019. The Human Brain Book: An Illustrated Guide to Its Structure, Function, and Disorders. London: DK.</p>
<p>4. Schopenhauer, Arthur. The World as Will and Representation. Translated by E. F. J. Payne. New York: Dover Publications, 1969.</p>
<p>5. Sartre, Jean-Paul. Being and Nothingness: An Essay in Phenomenological Ontology. Translated by Hazel E. Barnes. New York: Washington Square Press, 1992.</p>
<p>6. Floridi, Luciano. The Philosophy of Information. Oxford: Oxford University Press, 2011.</p>
<p>7. Husserl, Edmund. “Philosophy as Rigorous Science.” In Phenomenology and the Crisis of Philosophy, edited and translated by Quentin Lauer, 71–147. New York: Harper &amp; Row, 1965.</p>
<p>8. Searle, John. “Minds, Brains, and Programs.” Behavioral and Brain Sciences 3, no. 3 (1980): 417–457. <a href="https://doi.org/10.1017/S0140525X00005756">https://doi.org/10.1017/S0140525X00005756</a>.</p>
<p>9. Bergson, Henri. “The Perception of Change.” In Creative Evolution, translated by Arthur Mitchell, 266–284. New York: Henry Holt, 1911.</p>
<p>10. “Ethical Considerations in Artificial Intelligence Systems.” IEEE White Paper. Accessed November 2024. <a href="https://standards.ieee.org">https://standards.ieee.org</a>.</p>
<p>&nbsp;</p>
<p>The post <a href="http://www.miskatonian.com/2024/12/11/the-dreamer-and-the-machine-human-and-ai-in-the-mirror-of-consciousness/">The Dreamer and the Machine: human and AI in the mirror of consciousness</a> appeared first on <a href="http://www.miskatonian.com">The Miskatonian</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>http://www.miskatonian.com/2024/12/11/the-dreamer-and-the-machine-human-and-ai-in-the-mirror-of-consciousness/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>&#8220;Posting&#8221; and The Meaning Behind Digital Art</title>
		<link>http://www.miskatonian.com/2024/04/17/posting-and-the-meaning-behind-digital-art/</link>
					<comments>http://www.miskatonian.com/2024/04/17/posting-and-the-meaning-behind-digital-art/#respond</comments>
		
		<dc:creator><![CDATA[Joe Nally]]></dc:creator>
		<pubDate>Wed, 17 Apr 2024 10:30:13 +0000</pubDate>
				<category><![CDATA[All Articles]]></category>
		<category><![CDATA[Articles]]></category>
		<category><![CDATA[Essays]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[art]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[digital]]></category>
		<category><![CDATA[digital art]]></category>
		<category><![CDATA[Edward S. Herman]]></category>
		<category><![CDATA[John Berger]]></category>
		<category><![CDATA[Marshall McLuhan]]></category>
		<category><![CDATA[Norberto Bobbio]]></category>
		<category><![CDATA[Paul Schuette]]></category>
		<category><![CDATA[posting]]></category>
		<guid isPermaLink="false">http://www.miskatonian.com/?p=2347</guid>

					<description><![CDATA[<p>Artificial Intelligence also challenges how we make “art” and what is ultimately considered “art” in the two worlds. We need AI because it will accomplish the minor tasks that we need to move on to the bigger and more sophisticated ones. This also means that we shouldn’t be putting huge effort into the little things that AI can already do in a minute. However, we should also have the mandatory skill sets and practical knowledge of application in case AI breaks down (and it will). We have to understand what exactly digital art is and what place it has in the world of the internet.</p>
<p>The post <a href="http://www.miskatonian.com/2024/04/17/posting-and-the-meaning-behind-digital-art/">&#8220;Posting&#8221; and The Meaning Behind Digital Art</a> appeared first on <a href="http://www.miskatonian.com">The Miskatonian</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>In our waking hours, there is a split between the <em>meat</em> world and the <em>net </em>world.</p>
<p>I know. It’s crazy to apply this binary divide and assume technology is the all-encompassing power over our lives. …But that’s the truth. Without the internet, current world governments would fall apart.</p>
<p>The only world that matters is the net world; the “meat” reality it depends on is inferior. This, though, is radical on its own because it dismisses humanity as nothing more than a phase called the “Anthropocene.” It assumes human domination and influence are a fad and that we as humans must get ready for a post-Anthropocene future where we will be replaced by robots or non-human (possibly extraterrestrial) life. This talk is popular among transhumanist and nihilist talking heads like Kohei Saito, Timothy Morton, and Eugene Thacker.</p>
<p>If we have to embrace this human annihilation, we have to accept that the internet is the only tool for our success as human creatives. <a href="https://www.pilleater.com/p/why-we-need-artificial-intelligence" rel="">Artificial Intelligence</a> also challenges how we make “art” and what is ultimately considered “art” in the two worlds. We need AI because it will accomplish the minor tasks that we need to move on to the bigger and more sophisticated ones. This also means that we shouldn’t be putting huge effort into the little things that AI can already do in a minute. However, we should also have the mandatory skill sets and practical knowledge of application in case AI breaks down (and it will). We have to understand what exactly digital art is and what place it has in the world of the internet.</p>
<p>Art in the meat world is defined by the five senses: sight, smell, hearing, taste, and touch. An artistic discipline is created around one of these senses. The meaning of art is to be expressive in its selected medium. In the net world, being expressive is quite difficult. People can’t “touch,” “taste,” or “smell” something on the internet (not yet, anyway). We can, however, “see” and “hear” art and music on the internet! The internet is like a virtual reality video game that we get lost in, so we may have a fake simulation of those five senses.</p>
<p>John Berger <a href="https://www.pilleater.com/p/digital-object-theory" rel="">pointed out</a> in his 1972 documentary <em>Ways of Seeing</em> that art is becoming replicated and cloned, and no original “experience” can be enjoyed. Berger only recited Walter Benjamin’s 1935 commentary, “The Work of Art in the Age of Mechanical Reproduction,” giving it a twist for the Boomer generation. Benjamin’s “<em>aestheticization of politics</em>” was a focus found within his interest in Trotskyism and (possibly) Gramscism. The idea of a “culture war” was a new concept in the 1930s, and the notion that “<em>the medium is the message</em>” became a long-running fad up until the 1990s. Today, global governments use the tool of mass media to “manufacture consent” and push political agendas, as Noam Chomsky and Edward S. Herman noted, further debunking any egalitarian notion of the happy “global village” that Marshall McLuhan parades about. It is a useless pursuit to uphold a “permanent revolution” in the 21st century.</p>
<p>We are coming to a conclusion about the political left and right divide and realize that we have entered a new stage of <em>disinformation</em>, where only individuals can “consent” to their influences and actions through their own curated and fringe media outlets. These outlets, however, all relate to the same message of the net world, in that liberal posting and sharing is the only thing that matters. Norberto Bobbio was right to assume the rise of consensual identity politics around the terms of left or right as a subculture, where one can say “I am left-wing” just like the other can say “I am right-wing.” However, this also concludes the divide, where both umbrella categories are becoming <em>one</em> subcultural consumer class.</p>
<p>Art cannot have an ideological basis. Expression, beauty, and logic lack the political. Ethics and narratives are one thing, but the aestheticization of these concepts is another<em>. </em>Religion and ideology provide a purpose to produce and a goal to achieve. Devaluing expression, beauty, and logic implies that art is not real. In the net world, we can protect these three concepts by understanding what it means to “post.”</p>
<p>In the early 2000s, <a href="https://www.pilleater.com/p/how-ytmnd-taught-me-how-to-write" rel="">YTMND</a> allowed users to post a “YTMND,” or microsite, that included images, sound, and text. We can see, hear, and read a YTMND. Popular trends in the YTMND community are called “<em>fads</em>,” which would later become “memes” in the net world. The eventual creation of YouTube meant that users could upload entire videos and that YTMND became an obsolete medium. “Posting” ranges from whatever is uploaded on YouTube, Facebook, Twitter (now “X”), Instagram, Discord, Telegram, 4chan, or any obscure web forum or chat. Our options are no longer diverse, and all posting platforms have been compromised.</p>
<p>What we post is what matters. It could be a picture, a video, a comment, a cartoon, a radio show, a game, music, or anything that gets stored in the Internet Archive. The inflation caused by every possible human being posting on the internet implies that certain art is no longer of value because there are too many channels of “content” creation and/or neglected spaces and channels of posting that people are unaware of. Imagine a Facebook user with 900 “friends” who posts his original memes made in Pixlr, and yet nobody “likes” his daily uploads. This is because everyone left Facebook for another platform (like Instagram being more picture-friendly), and there are too many users doing the same thing on the platform. Or it&#8217;s that this user is trapped in an echo chamber but producing for his therapeutic gratification. We have entered a new stage of producing without an audience, in that the posting that is of real value or is good art is being neglected because we are unaware of these private (yet public) isolated posters. I call this phenomenon “<em><a href="https://www.pilleater.com/p/this-is-dark-data" rel="">dark data</a></em>” and explain it as “the urge to produce without meaning.”</p>
<p>On the internet, artistic intent is judged solely by the post. What is posted matters. This post may gain likes, “retweets,” and publicity from mainstream outlets. The algorithms lift up the dopamine hits that keep people addicted to the net world. Posting is judged on this hypothetical numbers game and mass “fake by democracy” consensus.</p>
<p>With that being said, digital art can only be conceived through a post. Digital art is judged on the basis of whether it’s a “shitpost” or not. Or, the user is going through a series of “Y-posting,” where “Y” represents any subject, slang, or activity. One could be “<em>Juche</em>posting,” “<em>Goon</em>posting,” “<em>Fed</em>posting,” and so on. What exactly is posted is judged on this fleeting merit. If digital art is understood by what is posted, we start to question the entire ideology of the internet. Everyone has equal access and power to post anything they want and shall be judged on what is uploaded to the server.</p>
<p>Posting implies that the art is finished before it can exist. Digital art is not done in a live setting (as we can’t rely on live streams for everything). Art has to be made and finally digitized for the internet. And what can be art is another thing. Everything is up for grabs on the internet, even if it’s not intended as art. Collage is the constant medium of choice, and whatever samples and video clips we take are now ours to create original art from. As Pablo Picasso once said, “Bad artists copy. Good artists <em>steal</em>!”</p>
<p>We are reminded what Paul Schuette wrote in his quick instruction manual, <em>Demystifying Max/MSP</em>. To Schuette,</p>
<blockquote><p>“The larger issue here is that outright stealing is an accepted part of programming culture. Pieces of code are borrowed, shared, and ripped from other people, and this is what you are expected to do. The best programmers working right now never start with an entirely blank screen. The problem is that people who are new to writing computer code sometimes feel a tinge of morality running up their spine when they ‘borrow’ a piece of code for the first time.”</p></blockquote>
<p>The realm of digital art is a cycle of inspiration, influence, and mimicry. What we see is up for grabs. What we take is now ours. And what we post from it is how we conceive new digital art. We could be under a parasocial “relationship,” a reality of solipsism, in that what we scroll through on “social” platforms influences us directly as artists and intellects. It’s a world without Chicago-style citation!</p>
<p>To find the origin of our favorite digital art, we follow the breadcrumb trail home and realize that, like the Big Bang, a random post conceived a new world. The origin of digital art came from posting, and to neglect that reality means we neglect the net world itself.</p>
<p>It’s not that we should think in terms of being in the Anthropocene or embrace a post-Anthropocene future. We realize that we are knee-deep in the techno-singularity. And how we can understand our current reality is rather a reflection of the delusions of undeveloped transhumanism and the cartoon prisons we irrationally embrace.</p>
<p>We cannot discuss digital art unless we talk about the<em> action</em> of posting and how it <em>influences</em> art back into the meat world. Posting is what makes us valued in the art industry. The truth is that we can’t exist as artists without a net world, no matter how much we try to negate it.</p>
<p>The solution is to post without an audience. We don’t need isolated likes and algorithm boosting to be valued, because there is no value within art, and to exist without purpose means we are truly free. To produce for the sake of it means we are no longer bound to the limits of what we have to produce for “society” and for profit. Artists are a type of malware that works against what the machine intended. No one is a “failed artist,” as they love to assume one can either “win” or “lose” in a market of flashy TikTok pornography.</p>
<p>Every time we post, we take up space on the internet. And if we take up enough space, we own it. It’s not done by “culture war” or any persuasion of political powers. All we have to do is post for the sake of it, even if the post is shit.</p>
<p>That’s what it means to be a digital artist.</p>
<p>The post <a href="http://www.miskatonian.com/2024/04/17/posting-and-the-meaning-behind-digital-art/">&#8220;Posting&#8221; and The Meaning Behind Digital Art</a> appeared first on <a href="http://www.miskatonian.com">The Miskatonian</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>http://www.miskatonian.com/2024/04/17/posting-and-the-meaning-behind-digital-art/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>The Detritus of this Technological Moment</title>
		<link>http://www.miskatonian.com/2024/03/07/the-detritus-of-this-technological-moment/</link>
					<comments>http://www.miskatonian.com/2024/03/07/the-detritus-of-this-technological-moment/#respond</comments>
		
		<dc:creator><![CDATA[Duncan Reyburn]]></dc:creator>
		<pubDate>Thu, 07 Mar 2024 16:18:38 +0000</pubDate>
				<category><![CDATA[All Articles]]></category>
		<category><![CDATA[Articles]]></category>
		<category><![CDATA[Essays]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[detritus]]></category>
		<category><![CDATA[Friedrich Junger]]></category>
		<category><![CDATA[garbage]]></category>
		<category><![CDATA[garbage disposal]]></category>
		<category><![CDATA[IKEA]]></category>
		<category><![CDATA[John Gall]]></category>
		<category><![CDATA[Marshall McLuhan]]></category>
		<category><![CDATA[systemantics]]></category>
		<category><![CDATA[technological]]></category>
		<category><![CDATA[technology]]></category>
		<category><![CDATA[truck]]></category>
		<category><![CDATA[wagon]]></category>
		<guid isPermaLink="false">http://www.miskatonian.com/?p=2203</guid>

					<description><![CDATA[<p>To know what this means, to know what this feels like, walk into a dark room and, if you’re not stuck in a phase of rolling blackouts as I often am here in South Africa, flick that light switch. Day erupts inside a room, and night is chased into its various corners. The entire environment is altered. McLuhan adopts this as a metaphor for what any and every technology does. You don’t just get what the technology is intended for; you get a whole new world.</p>
<p>The post <a href="http://www.miskatonian.com/2024/03/07/the-detritus-of-this-technological-moment/">The Detritus of this Technological Moment</a> appeared first on <a href="http://www.miskatonian.com">The Miskatonian</a>.</p>
]]></description>
<content:encoded><![CDATA[<p><span style="font-weight: 400;">There’s no getting away from waste. I mean literal garbage, although the idea has a deeper metaphysical resonance. For centuries, people used wagons to take away waste, and by the 1920s, open-top trucks were used for the same thing. This mechanical turn caused problems of an odorous sort, and so covered vehicles were soon preferred. Then, in 1937, a man with the darkly serendipitous name of George Dempster invented the Dempster-Dumpster system, which allowed wheeled waste containers to be mechanically tipped into a truck. This is a familiar sight across the globe today. It was because of Dempster’s word for those waste containers that the word </span><i><span style="font-weight: 400;">dumpster</span></i><span style="font-weight: 400;"> entered the English language.</span></p>
<p><span style="font-weight: 400;">Garbage disposal was always going to be a question. To be human is to use things. And it’s nearly impossible to use everything in its entirety—although Paleolithic man would have a thing or two to say against this presumption. Moreover, pretty much everything is accompanied by offcuts and byproducts. When the Industrial Revolution went to work on our conceptions of the world and of things in it, the question of waste ballooned. There was so much more of it. Dumpster trucks were, given this eventuality, a fairly sensible-seeming answer. And yet, this simple invention was less a solution to a problem than it was and remains the mother of several other problems. Here I borrow from Boromir: One does not merely </span><i><span style="font-weight: 400;">invent</span></i><span style="font-weight: 400;"> anything; to invent a thing is to remake the world. </span></p>
<p><span style="font-weight: 400;">The explicit goal of the invention of the garbage truck and its subsequent so-called improvements was, as will prove true for any new invention, not the only consequence. As soon as garbage trucks motored onto the world’s roads, systems needed to be created to manufacture them, supply chains needed to be arranged, skillsets needed to be combined to manage their manufacture, and so much more. As John Gall notes in his essential book </span><i><span style="font-weight: 400;">Systemantics</span></i><span style="font-weight: 400;">, a sort of rosetta stone book for so much accumulating nonsense in our time, “No matter what the ‘goal’ of the system, it immediately begins to exhibit system-behavior, that is, to act according to the general laws that govern the operation of all systems.”</span></p>
<p><span style="font-weight: 400;">Systems do not operate like machines. They operate like the world, as a more or less messy configuration of various kinds of order and disorder, roughly thrown together and fragile. Systems attempt order but tend to produce its other. Attempts to control things using technologies contribute significantly to robbing us of meaning. To introduce a new machine into the world is not to make the world more mechanical but to radically destabilize the equilibrium of the yinging-and-yanging of things. Because it is, in essence, artificial, this destabilization often tends towards inhuman proportions. </span></p>
<p><span style="font-weight: 400;">And so it is with garbage removal. Once upon a time, there was a single problem, but with the invented answer and the subsequent system within which that answer would need to operate, many further problems were also produced. “In the case of Garbage Collections,” writes Gall, “the original problem could be stated briefly as ‘What do we do with all this garbage?’ After setting up a garbage-collection system, we find ourselves faced with a new Universe of Problems.” To further Gall’s logic, I would say the question now becomes: ‘What do we do with all this garbage removal?’ </span></p>
<p><span style="font-weight: 400;">The new Universe of Problems includes “questions of collection, bargaining with the garbage collector’s union, rates and hours, collection on very cold or rainy days, purchase and maintenance of garbage trucks, millage and bond issues, voter apathy, regulations regarding separation of garbage from trash,” and, to quote Slavoj Žižek, “so on and so on. Sniff.” As civil engineers tell us, roads disintegrate more rapidly under the weight of trucks, too, which is to say that the effects of the technology supersede even the system itself. Nothing is a world unto itself. </span></p>
<p><span style="font-weight: 400;">Now, and by now, I mean right now, we are not just trying to remove rubbish but trying to figure out what to do when employees are absent or how to reschedule late collections. We, and by we, I mean some nebulous collective that doesn’t include me except in the sense that it must deal with my own garbage, are trying to figure out better and worse ways to manufacture dumpsters. Inventions beget inventions and technologies must, as Friedrich Jünger noted so long ago, submit themselves to the logic of perfection and rationalization and the distribution of poverty.</span></p>
<p><span style="font-weight: 400;">No invention is without existential implications. Gall notes what anyone who works in a large system knows: “Large systems usually operate in failure mode.” We do not merely solve problems; we create misery. Thomas Sowell’s insight needs to be remembered: </span><i><span style="font-weight: 400;">there are no solutions; there are only trade-offs. </span></i><span style="font-weight: 400;">Before I get sidetracked into discussing systems theory, however, I want to anchor the essence of this discussion in the invention of the garbage truck itself, and so, by implication, deal with the nature of technological extensions in general. Certain principles around technology that we find in the world of garbage and garbage removal apply more widely. If we get what’s happening with dumpster trucks, we stand some chance of understanding our present technological moment, in which various novel technologies are becoming the talk of the town: new ways to make vaccines, new medical technologies, brain-computer interfacing through implants, AR goggles, new developments in AI, and more. Despite the hesitations and warnings of so many sci-fi authors, techno-utopians are busy building an abyss to stare into. They are building a monster to imitate.</span></p>
<p><span style="font-weight: 400;">Dempster was no doubt proud of what he had done. Without question, his invention was clever. But I doubt he was thinking about the possible troubles that would be connected to the garbage disposal industry in the future while he was considering how to get the engineering aspects of his invention right, even if he was building on a previous idea. Having no access to him or his thinking about this at the time, I may be doing a terrible injustice to the man himself. And yet, by simply observing people now, the very species that quite a few of my readers belong to, I think my speculation is at least somewhat justified.</span></p>
<p><span style="font-weight: 400;">Let’s consider, for one wild moment, some or other </span><i><span style="font-weight: 400;">technoptimist</span></i><span style="font-weight: 400;">—this is what I will call the person who sees only the upsides of technology and not its side-effects and downsides. Let’s think about some enthusiastic supporters of technological innovations. This character, we shall imagine, is crazy about what science can do. He has ‘Follow the Science’ tattooed in different equally kitsch fonts on several parts of his anatomy. He is unquestionably in favor of whatever medical-industrial product you might want to give him, even if it is a cure for a disease that doesn’t exist and even if it might be worse than any disease. He has all the latest tech, even if it’s not particularly useful. His phone is all apps. His mind is full of pings and notifications, current things, and distractions. He eats the bugs, he wears the AR goggles, and he’s on the Neuralink trial waiting list. “Just look at how amazing these things are!” he screams obscenely loudly at anyone who walks past his cubicle at work. His colleagues look at him askew when he does this. No one likes to be yelled at. Who is this guy?</span></p>
<p><span style="font-weight: 400;">He is, without a doubt, a moron. What makes him a moron is his unquestioning obeisance to the immediately obvious. He, my little fictional conceit, sees any given thing as if it exists apart from a world of meanings and consequences. He sees only the thing itself and so cannot even see the thing itself. Because, after all, nothing, in reality, can exist in such an unworlded state. Nothing can, in reality, be separated from its environment. </span></p>
<p><span style="font-weight: 400;">And yet, the tragic stupidity of this technoptimist seems to me to be almost everywhere, albeit lurking in the secret machinations of normal people like you and me. Almost everyone is still likely, when asked what a garbage truck does, to respond that it removes garbage. The truck is the thing, not the vast, complex, and fragile system that came into being when it was put to work. And yes, this is true, but it is such a microscopic part of the truth that it may almost be said to be false. Schopenhauer was right about at least this one thing: the trouble we have is often less with imperfection than it is with distortions so dramatic that truth itself can start to appear to be false. </span></p>
<p><span style="font-weight: 400;">My point is this. The dumpster truck doesn’t just remove garbage; it also, perhaps on a much greater scale, </span><i><span style="font-weight: 400;">creates garbage. </span></i><span style="font-weight: 400;">Garbage trucks get into accidents. Garbage removers are injured on duty because of the trucks. Noise pollution gathers around homes when garbage trucks are around. And garbage itself becomes easier to produce and discard when we carry with us the assumption that there’s someone who cares enough to deal with the stuff we discard. The thing does a million things, not just one thing. It doesn’t just help but contributes significantly to making the world worse. Because—and here we are back at systems theory—what the thing </span><i><span style="font-weight: 400;">does</span></i><span style="font-weight: 400;"> is</span> <span style="font-weight: 400;">what the thing </span><i><span style="font-weight: 400;">is</span></i><span style="font-weight: 400;">. Nothing is reducible to its intended function. Intention, if anything, becomes a sort of magic dust in our eyes, hiding the real truth of things from us. </span></p>
<p><span style="font-weight: 400;">Concerning technologies and technological development, it is still normal to hear people speak about technology as a neutral thing—one with merely </span><i><span style="font-weight: 400;">technical limits—</span></i><span style="font-weight: 400;">that can be used in good or bad ways, depending on the user. What is missed is what Marshall McLuhan spent his entire career trying to convince people of. And here I am, fighting the same losing battle he was. McLuhan wanted people to know that we misunderstand things profoundly when we view them through whatever content they happen to relay. People review products in terms of their technical specifications but fail to see them as the effects and effectors of entire systems. New technologies aren’t just insertions into the world. They transform the world.</span></p>
<p><span style="font-weight: 400;">To explain the meaning of his phrase “the medium is the message,” McLuhan uses the example of a lightbulb. The lightbulb, he declares, can be thought of as a medium </span><i><span style="font-weight: 400;">without </span></i><span style="font-weight: 400;">a message. In the lightbulb, we are not dealing with any specific content, although perhaps neon lights shaped into letters may suggest otherwise. The thing itself does what it is. It emits light. McLuhan suggests that the message of this medium is </span><i><span style="font-weight: 400;">total change. </span></i></p>
<p><span style="font-weight: 400;">To know what this means, to know what this feels like, walk into a dark room and, if you’re not stuck in a phase of rolling blackouts as I often am here in South Africa, flick that light switch. Day erupts inside a room, and night is chased into its various corners. The entire environment is altered. McLuhan adopts this as a metaphor for what any and every technology does. You don’t just get what the technology is intended for; you get a whole new world. It’s not that one thing changes, everything does. And in this new world in which we can convince our minds and bodies that being out of step with the rhythms of nature is possible, we suffer from so much sleep deprivation. Our moods are affected, and our sense of reality itself is warped. Just because what technology does escapes our conscious awareness doesn’t mean it’s not doing what escapes our conscious awareness. Even that ever-so-helpful electric light manufactures garbage. When everything is artificially illuminated, a brand-new kind of darkness descends. The mind itself is darkened. It is left to its own oblivion.</span></p>
<p><span style="font-weight: 400;">We all know this somewhat implicitly when any new invention shows up on the world’s stage. With Apple’s Vision Pro recently unleashed, conscientious objectors to the creeping simulacrum, people like me, know that </span><i><span style="font-weight: 400;">not embracing this tech </span></i><span style="font-weight: 400;">is likely to change little. The mere presence of the new technology alters the way the world feels and operates; the mere introduction of the tech into the mimetic fray of crowdthink is sufficient to bring about some slippery slope. We’ve been here before, after all. We live in a world in which this has already happened again and again and again. Technological solutions are garbage producers. The novelty is nice for a moment before the aftermath needs to be dealt with. What is made is, in all likelihood, not just a neat little gadget but a new tool for manufacturing mental illness. And if we’re typing prompts into AI in a frenzy, we’re not just experiencing the IKEA effect; we’re in the process of willfully adopting a form of brain damage as we allow certain aspects of consciousness and neurological function to atrophy.</span></p>
<p><span style="font-weight: 400;">We already sense that everything is now bathed in that AR interface and the world is already becoming a glut of entirely average images, even if we’re not eating the bugs and living in the pod and wearing the goggles and typing the prompts. Consider the environmental effects of our electricity dependence, including the fact that, as the work of Steven Gonzalez Monserrate shows, the airline industry’s carbon footprint has been superseded by the Cloud. But beyond this, the psychic and social consequences of our technologies are immense. Techheads will spend hours and hours of their precious existence trying to figure out new uses for all the new tech, and so they will become even more beholden to it. Time will be spent. Lives will be wasted. Makers of the pixellated worlds will be unmade by the very pixellated world they are creating.</span></p>
<p><span style="font-weight: 400;">There’s no getting away from garbage. But now we have increasingly moved into a world that tries, tries, and tries again to get away from reality itself as if what it is doing is cleaning up the garbage. We live now in the world imagined in Pixar’s </span><i><span style="font-weight: 400;">Wall-E</span></i><span style="font-weight: 400;">, even if it doesn’t quite look as bad. The real is now hidden by all the products and byproducts we’ve produced and consumed. And there’s no sign that we’re slowing down. It’s as if we’re addicted to generating the circumstances within which life itself will become unlivable.</span></p>
<p><span style="font-weight: 400;">We have reasons already not to add yet another tweak to our psychological ecosystems. We know that the more artificial the world becomes, the more we work against ourselves, against what is human. And yet, we’re still beguiled by the content of our technologies. We think that what we’re conscious of is going to rule our fate. But as Carl Jung has wisely said, it is what they are unconscious of that seals the fates of people. They’ll live in the merry belief that they’re removing garbage. They’ll rejoice in their ingenuity as one new invention after another renders the world increasingly banal and soulless.</span></p>
<p><span style="font-weight: 400;">Just as people might go around believing they can have life without suffering, some think we can have the tech without its side effects. My argument here is slightly different. </span><i><span style="font-weight: 400;">Technology is mostly its side effects.</span></i><span style="font-weight: 400;"> What it undoes outweighs what it does. The instrumental use of the tool—that is, its content—is perhaps even almost irrelevant when compared to its total impact on the world. Garbage trucks produce more garbage than they take away. And technology, far more often and in more ways than we tend to realize, takes away more than it gives.</span></p>
<p>The post <a href="http://www.miskatonian.com/2024/03/07/the-detritus-of-this-technological-moment/">The Detritus of this Technological Moment</a> appeared first on <a href="http://www.miskatonian.com">The Miskatonian</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>http://www.miskatonian.com/2024/03/07/the-detritus-of-this-technological-moment/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>How AI Could Save Liberal Education</title>
		<link>http://www.miskatonian.com/2023/12/13/how-ai-could-save-liberal-education/</link>
					<comments>http://www.miskatonian.com/2023/12/13/how-ai-could-save-liberal-education/#respond</comments>
		
		<dc:creator><![CDATA[Lee Trepanier]]></dc:creator>
		<pubDate>Wed, 13 Dec 2023 00:19:27 +0000</pubDate>
				<category><![CDATA[All Articles]]></category>
		<category><![CDATA[Articles]]></category>
		<category><![CDATA[Essays]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AlphaGo]]></category>
		<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[ChatGPT]]></category>
		<category><![CDATA[education]]></category>
		<category><![CDATA[future]]></category>
		<category><![CDATA[Humanities]]></category>
		<category><![CDATA[John O. McGinnis]]></category>
		<category><![CDATA[liberal]]></category>
		<category><![CDATA[OpenAI]]></category>
		<category><![CDATA[Philip K. Dick]]></category>
		<category><![CDATA[plato]]></category>
		<category><![CDATA[STEM]]></category>
		<category><![CDATA[university]]></category>
		<guid isPermaLink="false">http://www.miskatonian.com/?p=1970</guid>

					<description><![CDATA[<p>Even before the publication of Stephen Marche’s Atlantic piece, “The College Essay is Dead,” there had already been discussions about AI writing programs like ChatGPT in the academy. But the past few months have seen a flurry of activity with college administrators calling emergency meetings, professors changing their assignments, and educators writing essays (some perhaps written by AI?) that range in reaction...</p>
<p>The post <a href="http://www.miskatonian.com/2023/12/13/how-ai-could-save-liberal-education/">How AI Could Save Liberal Education</a> appeared first on <a href="http://www.miskatonian.com">The Miskatonian</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Even before the publication of Stephen Marche’s <em>Atlantic </em>piece, “<a href="https://www.theatlantic.com/technology/archive/2022/12/chatgpt-ai-writing-college-student-essays/672371/" rel="">The College Essay is Dead</a>,” there had already been discussions about AI writing programs like ChatGPT in the academy. But the past few months have seen a flurry of activity with college administrators calling emergency meetings, professors changing their assignments, and educators writing essays (some perhaps written by AI?) that range in reaction from the nonchalant to the apocalyptic about the fate of college writings, <a href="https://americanmind.org/salvo/the-botfire-of-the-humanities/" rel="">the future of liberal education</a>, and <a href="https://www.insidehighered.com/views/2023/02/09/chatgpt-plague-upon-education-opinion" rel="">the outlook of higher education</a>.</p>
<p>While the glitches and weaknesses of ChatGPT have been pointed out, these presumably will be corrected over time as the AI technology improves and its data set enlarges. There has also been discussion about cybersecurity and the economic and political implications that AI poses for societies. But most of the public discourse so far has been about higher education.</p>
<p>Unlike most academics who are skeptical, suspicious, or resigned about ChatGPT, I am hopeful, believing that AI could offer a genuine path for liberal education to revitalize itself in the university. Keep in mind I am not arguing from some techno-utopian perspective where transhumanism is the answer to everything—I also have reservations and concerns about the ubiquitous adoption of technology in our contemporary lives. But I do think technologies like ChatGPT could return American higher education to the fundamental questions of human identity, meaning, and flourishing that have been pushed aside the past fifty years for economic credentialing.</p>
<p><strong>The Reactions So Far</strong></p>
<p>For those not familiar with AI programs like ChatGPT, they are chatbots—computer programs that simulate conversations with humans—that predict what words and phrases should come next. As AI, they continually learn as they gather more data from human interaction and from texts like articles, books, and websites. The GPT-3 model, for example, was “trained” on a text set that included 8 million documents and over 10 billion words. While there are other AI chatbot programs, ChatGPT received the most attention when its third prototype launched this past November. That’s mainly because its technology convincingly mimicked human writing, though the business also had a superb marketing strategy that resulted in an estimated $29 billion valuation of OpenAI, ChatGPT’s parent company.</p>
<p>The most common response from the academy has been resignation mixed with suspicion. Faculty know there is nothing they can do to stop AI from entering the academy. All that is left is to adjust and accommodate in the hope professors can retire before human teaching is entirely replaced. Recommendations include in-class writing examinations, assigning content behind a paywall, adopting show-and-tell exercises, and employing AI to teach students how to write better. While these suggestions are useful in identifying what constitutes human writing, they are only stop-gap measures before AI passes the Turing Test.</p>
<p>Strangely, one response to ChatGPT that is notably absent is an eagerness to discuss how AI can help students with disabilities in their writing. One would think that ChatGPT could help students with disabilities to learn how to write better or do their writing assignments. AI could possibly open the doors of higher education to a new set of students who may have difficulty completing writing assignments, for example, those with dyslexia or dysgraphia. In this sense, AI could expand access to higher education in ways that were previously not thought possible.</p>
<p>A second response has been skepticism. AI will not replace human writing because AI does not interact directly with the world and, therefore, cannot represent it as humans do. As <a href="https://lawliberty.org/what-humanity-adds/" rel="">John O. McGinnis</a> puts it, “ChatGPT is just connected to the words people have written about the world, not the world itself. It floats on the vast sea of verbiage we have created and is not connected directly to the actual sea.” Practically, this is evident when, after entering a writing prompt into the AI, professors evaluate the results as acceptable in structure (for example, English composition’s five-paragraph essay) but point out factual mistakes or raise aesthetic questions about the writing, like its “voice” or its inability to engage the reader emotionally. According to this group, ChatGPT is a good facsimile of freshman undergraduate writing, but it is only a facsimile.</p>
<p>I suspect, however, that these obstacles will be overcome in time as the technology in AI improves. Its neural network will eventually process data more efficiently, and its learning capacity will improve as it gathers more data from additional texts and human interactions. It also raises the philosophical, <a href="https://www.bostonreview.net/articles/henry-farrell-philip-k-dick-and-fake-humans/" rel="">Philip K. Dick type of question</a> of what constitutes human writing if an AI can do it as well as people (a question better pursued another time). Regarding AI not being connected to the “actual sea” of reality, humans have been interfacing with AI for years, from purchasing airline tickets to visits to the doctor. Unfortunately for our skeptics, AI has been here for a while and is not going away.</p>
<p>A third response, a variation of the second, has been the belief that ChatGPT will never replace human skills like critical and creative thinking. AI may replace human writing, but not human thinking itself. But this may not be true, as evidenced by AI programs like <a href="https://www.youtube.com/watch?v=WXuK6gekU1Y" rel="">AlphaGo</a>, which has defeated the best Go players in the world, a feat that required both critical and creative thinking. If AI can think critically and creatively, there is no reason it couldn’t be designed to educate students in these skills in the near future.</p>
<p>In fact, ChatGPT’s ability to convincingly mimic human writing is actually a reflection of critical and creative thinking. Like numerical manipulation, writing is essentially about solving problems, whether they are about politics, policy, philosophy, aesthetics, or something else. What makes ChatGPT so unnerving to so many is that writing is perceived as a uniquely human endeavor, unlike numerical calculations. But both tasks—whether solving interpolation problems or writing philosophical essays—are fundamentally the same. They are solving problems—to think critically and creatively. AI programs can now do this both numerically and linguistically, albeit the latter imperfectly at the moment. With the advent of AI, the rationale that only humans can teach students critical and creative thinking has a limited shelf life.</p>
<p>Perhaps more controversially, it is not clear that universities actually teach critical thinking to their students. With every college course now required to articulate its student learning outcomes (SLOs)—outcomes that have to be quantified and measured—in order to demonstrate students are learning, one wonders, are these SLOs really telling us anything of value? Now that assessment drives academic content in American universities, the result is a flat account of critical thinking, where one number represents excellence and another number mediocrity. As one essay’s subheading about ChatGPT states, “In a world where students are taught to write like robots, it’s no surprise that a robot can write for them.”</p>
<p><strong>Embrace the Future</strong></p>
<p>The conversation about ChatGPT so far has mainly focused on the effects it will have on the humanities. Putting aside the ideological nature of the humanities and the problems <a href="https://lawliberty.org/book-review/greatest-university-ever/" rel="">that assessment</a> poses to student learning, I think that humanities professors will be relatively better off compared to their faculty peers when AI is fully adopted by the university. The problem won’t be the mass unemployment of English, history, philosophy, classics, or theology professors (this already is a problem); rather, the problem will be the mass unemployment of STEM (science, technology, engineering, mathematics) and pre-professional faculty. AI presents a greater threat to faculty teaching courses like Fixed Income Securities, Health Assessment, and Biochemistry than to faculty helping students understand and enjoy the texts of Homer, Aquinas, and Nietzsche.</p>
<p>In other words, the subjects taught by STEM and pre-professional faculty appear to be the most likely to be replaced by AI in the future. These subjects require numerical critical thinking in making assessments about populations, something that AI now does as well as, if not better than, humans. For example, some AI programs have better diagnostic accuracy than human doctors, and last year an AI’s stock picks generated a higher price return than the S&amp;P 500. Some are currently discussing whether AI will replace engineers, nurses, and accountants in the near future. The question that parents should ask their college-aged children now is not what they are going to do with that English degree, but rather whether there will even be a job available when their civil engineering, nursing, or accounting major graduates.</p>
<p>Even more depressing for STEM and pre-professional faculty is the rise of <a href="https://www.insidehighered.com/news/2020/08/27/interest-spikes-short-term-online-credentials-will-it-be-sustained" rel="">alternative credentialing programs</a>. Businesses like Google, Bank of America, GM, IBM, and Tesla have removed the college degree requirement for some positions in their companies. In some states, one can become a teacher at a private school without having an education degree. As AI improves its numerical and linguistic critical thinking skills, companies are likely to incorporate AI into their pre-screening and training of employees. There is also great potential for growth in alternative credential agencies, which can certify students in certain skills, and many of these credentials will likely be available free online. All these trends challenge the university’s primary status as a credentialer and signaler to employers of who can think and write.</p>
<p>This, in turn, raises the question of why parents should shell out tens of thousands of dollars every year for their children to attend college when they can learn for free online, get credentialed elsewhere cheaper and quicker, or be trained by their employer. For the elite universities, the Harvards, the Yales, the Stanfords, this is not likely to be an issue because the opportunity to network with children of the elite will outweigh any financial cost or lack of learning. But for mid- and low-tier institutions, such as public regional comprehensive schools, AI poses an existential threat, especially if their funding model is based on STEM and pre-professional students. Granted, this process may take a few years or a few generations, but at some point in the future, the rationale for universities to teach STEM and pre-professional students will weaken, if not disappear outright.</p>
<p>If the news about AI is bad for schools that rely on their STEM and pre-professional programs, it could be good for those universities that have a clearly defined mission and identity rooted in liberal education. If liberal education is to study something <a href="https://lawliberty.org/lets-save-liberal-education-by-rethinking-it/" rel="">for its own sake</a> in order for us to reflect upon who we are and what our purpose in life is, then this is best accomplished by studying the humanities. By reading and discussing literature, history, philosophy, and other traditions of the humanities, students learn the inherent value of liberal education: to be free from the demands of necessity and the call for utility in order to be connected to what authentically makes one a human being.</p>
<p>With AI, the point of university education might shift: no longer the acquisition of economic or critical skills, but the formation of a free and reflective human being. One would enroll in college because it is primarily understood as an intrinsic good for human flourishing. If you want a job, go learn AI on the Internet (although conceivably, AI could be incorporated as part of a liberal education). Strangely, we may, in the future, return to Plato’s Academy and Bologna University, where higher education was about contemplative learning, allowing students to reflect upon the fundamental and existential questions of identity, meaning, and purpose in their lives.</p>
<p>One potential concern is whether liberal education would be reserved only for the elite (truly reflecting Plato’s Academy, where only the upper class could participate) while most of the populace is trained by, or replaced by, AI. This is particularly problematic in a democratic society where inequality is currently a prominent topic in the public discourse. But it is also possible that a widely accessible liberal education will emerge, as evident in the rise of the classical school movement, which places the humanities at the heart of its curriculum, or in past attempts like Robert Hutchins’ and Mortimer Adler’s <a href="https://www.amazon.com/Dream-Democratic-Culture-Mortimer-Intellectual/dp/0230337465" rel="">Great Books program</a>, whose founders believed liberal education was necessary for the survival of democracy.</p>
<p>Since the turn of the century, concerns about the place and relevance of liberal education in the American university have continued unabated. ChatGPT appears to put another nail in the coffin of liberal education; however, a closer look suggests it could be the key to liberal education’s resurrection. With employment demands, assessment requirements, and skill training gone, what is left for the university to do in the age of AI? To study things for their own sake—and only liberal education can provide that.</p>
<p>The post <a href="http://www.miskatonian.com/2023/12/13/how-ai-could-save-liberal-education/">How AI Could Save Liberal Education</a> appeared first on <a href="http://www.miskatonian.com">The Miskatonian</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>http://www.miskatonian.com/2023/12/13/how-ai-could-save-liberal-education/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
