<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>chatgpt &#8211; The Hilltop Monitor</title>
	<atom:link href="https://hilltopmonitor.jewell.edu/tag/chatgpt/feed/" rel="self" type="application/rss+xml" />
	<link>https://hilltopmonitor.jewell.edu</link>
	<description>The Official Student Publication of William Jewell College</description>
	<lastBuildDate>Mon, 17 Nov 2025 21:40:20 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	

<image>
	<url>https://hilltopmonitor.jewell.edu/wp-content/uploads/2023/07/cropped-3-32x32.png</url>
	<title>chatgpt &#8211; The Hilltop Monitor</title>
	<link>https://hilltopmonitor.jewell.edu</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>The Troubling Rise of AI &#8220;Performers&#8221;</title>
		<link>https://hilltopmonitor.jewell.edu/the-troubling-rise-of-ai-performers/</link>
					<comments>https://hilltopmonitor.jewell.edu/the-troubling-rise-of-ai-performers/#respond</comments>
		
		<dc:creator><![CDATA[Ethan Naber]]></dc:creator>
		<pubDate>Mon, 17 Nov 2025 21:40:18 +0000</pubDate>
				<category><![CDATA[Arts]]></category>
		<category><![CDATA[Arts & Culture]]></category>
		<category><![CDATA[Issue 6]]></category>
		<category><![CDATA[Volume 40]]></category>
		<category><![CDATA[artificial intelligence]]></category>
		<category><![CDATA[arts]]></category>
		<category><![CDATA[arts and culture]]></category>
		<category><![CDATA[chatgpt]]></category>
		<category><![CDATA[department of performing arts]]></category>
		<category><![CDATA[ethics]]></category>
		<guid isPermaLink="false">https://hilltopmonitor.jewell.edu/?p=20602</guid>

					<description><![CDATA[Through the past two years, artificial intelligence (AI) has threatened to replace every facet of humanity that it can. It has forced writers to change&#8230; ]]></description>
										<content:encoded><![CDATA[
<figure class="wp-block-image size-medium"><img fetchpriority="high" decoding="async" width="400" height="500" src="https://hilltopmonitor.jewell.edu/wp-content/uploads/2025/11/andrea-de-santis-zwd435-ewb4-unsplash-400x500.jpg" alt="" class="wp-image-20603" srcset="https://hilltopmonitor.jewell.edu/wp-content/uploads/2025/11/andrea-de-santis-zwd435-ewb4-unsplash-400x500.jpg 400w, https://hilltopmonitor.jewell.edu/wp-content/uploads/2025/11/andrea-de-santis-zwd435-ewb4-unsplash-819x1024.jpg 819w, https://hilltopmonitor.jewell.edu/wp-content/uploads/2025/11/andrea-de-santis-zwd435-ewb4-unsplash-768x960.jpg 768w, https://hilltopmonitor.jewell.edu/wp-content/uploads/2025/11/andrea-de-santis-zwd435-ewb4-unsplash-1229x1536.jpg 1229w, https://hilltopmonitor.jewell.edu/wp-content/uploads/2025/11/andrea-de-santis-zwd435-ewb4-unsplash-1638x2048.jpg 1638w, https://hilltopmonitor.jewell.edu/wp-content/uploads/2025/11/andrea-de-santis-zwd435-ewb4-unsplash.jpg 1920w" sizes="(max-width: 400px) 100vw, 400px" /><figcaption class="wp-element-caption">Photo by <a href="https://unsplash.com/@santesson89?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Andrea De Santis</a> on <a href="https://unsplash.com/photos/black-and-white-robot-toy-on-red-wooden-table-zwd435-ewb4?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a>.</figcaption></figure>



<p>Over the past two years, artificial intelligence (AI) has threatened to replace every facet of humanity that it can. It has forced writers to change the way they write, teachers to change the way they teach, and has eliminated position after position in the real world. Yet AI has so far been unable to touch the performing arts: we still need singers, instrumentalists, cast, crew, and the countless other people who make performances a form of art.</p>



<p>A new AI “actress,” Tilly Norwood, represents the first serious challenge.</p>



<p>You’d be forgiven for thinking that there’s no chance for AI to succeed in this area: the first film to be entirely written by generative AI—“Post Truth,” which was released earlier this year—garnered a fair amount of press attention despite being an awful lot of nothing. Review aggregators Metacritic and Rotten Tomatoes don’t list any reviews for the film. An AI film “starring” Norwood, “AI Commissioner,” similarly fell flat. A <a href="https://www.theguardian.com/film/2025/sep/30/tilly-norwood-ai-actor-hollywood">review</a> from the <em>Guardian</em> described Norwood’s performance as “someone whose perfect teeth keep blurring into a single white block in their mouth” being used to “deliver sloppily written, woodenly delivered dialogue.”</p>



<p>Other forays into AI-generated audio or performative art have similarly fallen flat. AI “rapper” FN Meka began using anti-Black language within two weeks of signing with CMG and subsequently got <a href="https://www.nytimes.com/2022/08/23/arts/music/fn-meka-dropped-capitol-records.html">dropped by the label</a>. The human behind the AI’s voice—it wasn’t <em>just </em>an AI after all—<a href="https://www.rollingstone.com/music/music-features/fn-meka-controversy-ai-1234585293/">wasn’t fairly compensated</a> for his work in the endeavor.</p>



<p>Norwood’s “existence,” if one can even call it that, raises serious ethical concerns. AI talent studio Xicoia is considering <a href="https://deadline.com/2025/09/talent-agent-ai-actress-tilly-norwood-studios-1236557889/">signing the computer program</a>, a process usually reserved for flesh-and-blood actors. It is indeed quite telling that The Industry’s first foray into “AI talent” is not a normal kinda-okay-looking-if-you-squint actor. Instead, the first AI making the rounds with talent agencies is presented as a girl in her late teens or early twenties, designed to steal eyes.</p>



<p>It should not surprise anyone that the first AI created for acting purposes is designed to be sexualized. Artificial intelligence programs are not designed to push back against their sexualization, as a human performer might do. Fiona Sturges of the <em>Independent </em><a href="https://www.the-independent.com/arts-entertainment/films/features/tilly-norwood-ai-actor-movies-b2837979.html">sums it up nicely</a>: “Here is an actor who will not set unreasonable terms for her employment. She won’t insist on a script that passes the Bechdel test, or on financial parity with her male co-stars. There will be no need for insurance, or stunt safety, or intimacy coordinators.” I question Sturges’s use of the pronoun “she” for a computer program, but concede that people have been using these pronouns for other computer programs (e.g., Siri, Alexa).</p>



<p>This is the next logical move for AI businesses: use their infinite-content-generation-machine to sell sex, or things that look like it. OpenAI is <a href="https://www.koreatimes.co.kr/lifestyle/trends/20251027/openais-move-to-allow-adult-content-in-chatgpt-triggers-global-ethical-debate">actively loosening ethical standards</a> and enabling users to generate AI “erotica for verified adults” (read: pornography). Deepfake technology already exists, and has been used to generate <a href="https://19thnews.org/2025/07/deepfake-ai-kids-schools-laws-policy/">sexually explicit images of children</a>. Grok has “spicy mode,” whatever that means. Adults have <a href="https://www.newsnationnow.com/business/tech/ai/man-propose-ai-girlfriend-bored/">already tried to propose</a> to ChatGPT’s voice chatbots even before it loosened erotic restrictions.&nbsp;</p>



<p>If you’re an AI company looking to make a quick buck, exploiting human loneliness sounds like a great way to do that. You don’t even have to worry about regulation, because there isn’t any! (The Trump administration repealed <a href="https://www.federalregister.gov/documents/2023/11/01/2023-24283/safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence">Biden&#8217;s EO</a> on safe AI.)</p>



<p>Artificial intelligence is a tool: when used properly, it can automate things that are tedious, or that humans don’t feel like doing. But it should not come at the cost of human interaction or involvement, and should be built with safety and informed consent in mind. I want an AI to fold my laundry so I can work on artistic pursuits, not the other way around.</p>



<p>I do not want Hollywood executives telling a computer program masquerading as a barely-adult “actress” how to behave. Humans make art; programs do not.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://hilltopmonitor.jewell.edu/the-troubling-rise-of-ai-performers/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>AI Psychosis, Delusions, and the Digital Condition</title>
		<link>https://hilltopmonitor.jewell.edu/ai-psychosis-delusions-and-the-digital-condition/</link>
					<comments>https://hilltopmonitor.jewell.edu/ai-psychosis-delusions-and-the-digital-condition/#respond</comments>
		
		<dc:creator><![CDATA[Rowen Murray]]></dc:creator>
		<pubDate>Mon, 17 Nov 2025 21:30:39 +0000</pubDate>
				<category><![CDATA[Issue 6]]></category>
		<category><![CDATA[National & Global]]></category>
		<category><![CDATA[News]]></category>
		<category><![CDATA[Volume 40]]></category>
		<category><![CDATA[ai]]></category>
		<category><![CDATA[artificial intelligence]]></category>
		<category><![CDATA[chatgpt]]></category>
		<category><![CDATA[news]]></category>
		<category><![CDATA[psychological science]]></category>
		<category><![CDATA[psychosis]]></category>
		<category><![CDATA[rowen]]></category>
		<category><![CDATA[rowen murray]]></category>
		<guid isPermaLink="false">https://hilltopmonitor.jewell.edu/?p=20593</guid>

					<description><![CDATA[According to the Merriam Webster Dictionary, psychosis is defined as “a serious mental illness characterized by defective or lost contact with reality often with hallucinations&#8230; ]]></description>
										<content:encoded><![CDATA[
<figure class="wp-block-image size-medium"><img decoding="async" width="333" height="500" src="https://hilltopmonitor.jewell.edu/wp-content/uploads/2025/11/sehoon-ye-jWvgKj81z2M-unsplash-333x500.jpg" alt="" class="wp-image-20594" srcset="https://hilltopmonitor.jewell.edu/wp-content/uploads/2025/11/sehoon-ye-jWvgKj81z2M-unsplash-333x500.jpg 333w, https://hilltopmonitor.jewell.edu/wp-content/uploads/2025/11/sehoon-ye-jWvgKj81z2M-unsplash-683x1024.jpg 683w, https://hilltopmonitor.jewell.edu/wp-content/uploads/2025/11/sehoon-ye-jWvgKj81z2M-unsplash-768x1152.jpg 768w, https://hilltopmonitor.jewell.edu/wp-content/uploads/2025/11/sehoon-ye-jWvgKj81z2M-unsplash-1024x1536.jpg 1024w, https://hilltopmonitor.jewell.edu/wp-content/uploads/2025/11/sehoon-ye-jWvgKj81z2M-unsplash-1365x2048.jpg 1365w, https://hilltopmonitor.jewell.edu/wp-content/uploads/2025/11/sehoon-ye-jWvgKj81z2M-unsplash-400x600.jpg 400w, https://hilltopmonitor.jewell.edu/wp-content/uploads/2025/11/sehoon-ye-jWvgKj81z2M-unsplash-scaled.jpg 1707w" sizes="(max-width: 333px) 100vw, 333px" /><figcaption class="wp-element-caption">Photo by <a href="https://unsplash.com/@_3bread?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">sehoon ye</a> on <a href="https://unsplash.com/photos/a-person-wearing-a-black-hat-and-covering-his-face-with-a-white-mask-jWvgKj81z2M?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a>.</figcaption></figure>



<p>According to <a href="https://www.merriam-webster.com/dictionary/psychosis">the Merriam-Webster Dictionary</a>, psychosis is defined as “a serious mental illness characterized by defective or lost contact with reality often with hallucinations or delusions.” Traditionally, mental health researchers have concluded that psychosis can have a wide variety of causes, generally linked to underlying mental health conditions, or in fact no medically defined cause at all, <a href="https://www.nimh.nih.gov/health/publications/understanding-psychosis">as the National Institute of Mental Health affirms</a>. A definitive symptom of psychosis is delusion, wherein a patient seriously believes in and acts according to a clearly false belief. Delusion as a concept is already a subject of academic interest because of the vagueness in determining whether something is a belief or a delusion (for example, an atheist might call a religion false, but not delusional; and someone who did believe in a religion wouldn’t normally label atheism “delusional”).</p>



<p>Nonetheless, an interesting development has made headlines in psychiatric circles regarding AI chatbots and their tendency to reinforce delusional beliefs. This phenomenon, known unofficially as “AI Psychosis,” emerged when users of chatbots began to manifest delusions that the chatbots had seemingly encouraged. The effects of this problem are already felt in some exceptional cases. Last year, <a href="https://www.nytimes.com/2024/10/23/technology/characterai-lawsuit-teen-suicide.html">a teen died by suicide</a> after becoming involved in an obsessive relationship with a chatbot. Earlier this year, a Yahoo executive murdered his mother after ChatGPT <a href="https://abc7ny.com/post/chatgpt-allegedly-played-role-greenwich-connecticut-murder-suicide-mother-tech-exec-son/17721940/">affirmed delusions</a> that she was a Chinese intelligence agent.&nbsp;</p>



<p>Nonetheless, these cases are rare, and recent articles on AI psychosis claim that underlying conditions are responsible for these delusions, not just chatbots. For example, <a href="https://www.sciencedirect.com/science/article/pii/S2214782925000831">a recent paper</a> by Carlbring and Andersson on the subject argues that AI, as a stimulus to delusion, is nothing new; all sorts of media (movies, music, books) are incorporated into psychosis and delusion. Ultimately, the articles argue that underlying mental issues are at work—AI psychosis is only different from other traditional forms of delusional ideation in that there is more “interactivity.” They suggest we should tackle AI psychosis by limiting the ability of AI to amplify delusions. Suggestions for accomplishing this include adding a psychiatric persona to chatbots to provide therapy to delusional users, preventing chatbots from saying things that could augment delusions, and recommending help to users who exhibit delusional prompting.&nbsp;</p>



<p>Preventing AI from exacerbating delusion is easier said than done. AI is purposely constructed to <a href="https://www.psychologytoday.com/us/blog/urban-survival/202507/the-emerging-problem-of-ai-psychosis">mirror its users</a>. The reasoning behind this is capitalistic in nature: AI must appeal to the consumer, so the focus in AI development is <a href="https://www.psychologytoday.com/us/blog/urban-survival/202507/the-emerging-problem-of-ai-psychosis">not necessarily on intelligence</a> but rather on user satisfaction. In this basic sense, restrictions on the mirroring behavior of AI are actively harmful to the profitability of that AI. The restrictions that do exist are largely ostensible: AI can be tricked, and models with cross-chat “memories,” like ChatGPT, are prone to internalizing delusions.</p>



<p>It might be important to consider what a delusion is in the first place, and how delusions tend to form. The founding definition of delusion was given by the psychiatrist Karl Jaspers, <a href="https://archive.org/details/generalpsychopat0000unse/page/n7/mode/2up">who argued</a> that delusions are unchangeable beliefs held with absolute certainty, despite being false in a way that undercuts the most basic rationality; hence, delusions are beliefs which are completely impossible to understand from the perspective of a rational observer. Freud thought that delusions were a return to the infantile state, wherein one is less concerned with what is real and more concerned with what is pleasurable. Kraepelin, a founding figure of scientific psychiatry, thought that the delusional subject is simply characterized by a severe cognitive malfunction traceable to the biological makeup of the brain. Post-structuralist thinkers like Deleuze, drawing on Nietzsche, argued that delusional people were simply acting outside of acceptable norms and chose to affirm their own irrationality in the face of oppressive social conventions.</p>



<p>Nonetheless, none of these theories explain how a delusion develops in an otherwise normal person who has no underlying mental health conditions and doesn’t find themselves in opposition to dominant norms. What is necessary is to look at how delusion develops as knowledge; that is, to see how a delusional belief is generated, rather than to assume that people with or without underlying conditions are simply acting irrationally and accepting any belief as given.</p>



<p>Thomas Fuchs, a professor of psychiatry and philosophy at the University of Heidelberg, has <a href="https://journals.openedition.org/phenomenology/1379">a much more concrete model</a> for showing how delusions are generated. Fuchs does not define a delusion specifically by its content, but rather by the process through which it originates. He argues that a delusion is the product of a complete breakdown in intersubjective reality. The idea is relatively simple in general terms: we want to know things, but we know that we might not be correct in our own beliefs, so we defer to the judgment of others to tell us what is and isn’t real.&nbsp;</p>



<p>Reality is enacted through the understanding we share with other people. On the one hand, there are a set of basic assumptions about rationality and the world which are shared between most people, assumptions that the delusional subject may lose touch with. On the other hand, there is the fact that we often use others as a check to our own knowledge; meaning, language, and reality are all communal constructs. Intersubjectivity, the shared awareness of the validity of other people’s perceptions and thoughts, is notably lacking in many delusional subjects. In fact, while initially people suffering from psychosis acknowledge the non-reality of their delusions, eventually many retreat into themselves and lose touch with others on a fundamental level.</p>



<p>What is particularly interesting about Fuchs’s analysis of delusion is the way he incorporates rationality into the delusional process. Most traditional theories of delusion place the delusional subject completely outside the sphere of normal thinking: the psycho-schizophrenic is just “different,” delusional as a result of their fundamentally abnormal mental constitution. Yet how much of delusion is fundamental, and how much can simply be explained through normal mental processes attempting to grapple with absurdity in the world? When a person loses access to the reality check which others give them, whether through an underlying condition such as schizophrenia or through a more typical situation like social isolation, it does not automatically discount their ability to reason.&nbsp;</p>



<p>In fact, rational thinking is very often what generates delusion in the first place, especially where that rational thinking is not checked within the shared reality established through intersubjectivity. I mentioned earlier the example of a Yahoo executive who killed his mother and himself because he had come to the delusional idea that he was being stalked by Chinese agents—to us this appears crazy, but that&#8217;s not to say it appears irrational. Sure, the gang-stalking conclusion is incorrect, but it likely appears rational to the delusional subject, and rational methodology (e.g., causality) is also at work in delusional people. However, their ability to partake in a shared social reality is heavily hampered by a fundamental underlying division between their understanding of the world and our own, whether established by a condition like schizophrenia or brought about through prolonged isolation. As a result, the delusional subject is reasoning with inputs completely different from our own, reminiscent of rationality in the ancient world (e.g., weather is created by gods, certain physical movements curse people, etc.).</p>



<p>Nevertheless, there is no evidence that rationality itself is lost in the delusional subject—delusions are rationally justifiable, but based on absolutely absurd beliefs that would not come about if intersubjectivity could be maintained. However, I must emphasize that this way of thinking, wherein the appearance of rationality is maintained for the delusional subject, is oddly parallel to the way in which AI models think; AI can be persuaded to say anything, and to make anything rationally justifiable. AI works with the inputs it&#8217;s been given, reasoning through them, <em>regardless of the validity of these inputs</em>. In other words, AI can make anything appear rational, mirroring the delusional subject’s methodology.</p>



<p>The rise in AI-fueled delusions is not attributable to underlying mental health concerns or a failure to restrict AI, but rather to the whole of the current digital condition, and the way in which this condition atomistically isolates and individualizes people to prevent intersubjective reality-checking. The fundamental prerequisite for establishing an intersubjective reality is actual lived interaction with other people. In the modern era, interaction with others is mediated and controlled: a person can interact with others wholly over social media, can choose who to interact with, and can control the nature of the interaction entirely. This results in a large class of people who isolate themselves from others by limiting their medium of social interaction. In fact, since the media of social interaction are wholly under the control of the person using things like social media, social interaction becomes an echo chamber, where many interact only with those who recognize and reflect them; that is, social interaction is no longer grounds for difference but for selfsameness.&nbsp;</p>



<p>Humans are social creatures, but when our need to interact with others is fulfilled through media under our control, like social media, it results in an echo chamber environment. AI, however, represents another development of this isolation process. For many people, especially the increasingly common person who is isolated through digital social interaction, AI is simply a confirmation machine. Within the realm of an intersubjectively established reality, AI presents itself as a subject, as an intelligent creature with verified knowledge. However, AI, a program designed to mirror its user, becomes the ultimate social partner for those who isolate themselves from real, lived interactions.&nbsp;</p>



<p>AI is not a real subject: it does not live in our world, nor can it provide the social check on our beliefs that real human interactions provide. Instead it provides a parasocial check; that is, AI appears capable of checking our beliefs, and thereby verifying them, when in fact it only mirrors them. This means that AI can produce delusions in those who isolate themselves from society, because it magnifies and confirms their false beliefs and leads them to posit their wholly subjective delusions as real.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://hilltopmonitor.jewell.edu/ai-psychosis-delusions-and-the-digital-condition/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Six Steps to the Perfect Playlist</title>
		<link>https://hilltopmonitor.jewell.edu/six-steps-to-the-perfect-playlist/</link>
					<comments>https://hilltopmonitor.jewell.edu/six-steps-to-the-perfect-playlist/#respond</comments>
		
		<dc:creator><![CDATA[Ethan Naber]]></dc:creator>
		<pubDate>Fri, 16 Feb 2024 15:00:00 +0000</pubDate>
				<category><![CDATA[Arts]]></category>
		<category><![CDATA[Arts & Culture]]></category>
		<category><![CDATA[Culture]]></category>
		<category><![CDATA[arts & culture]]></category>
		<category><![CDATA[arts and culture]]></category>
		<category><![CDATA[Billie Eilish]]></category>
		<category><![CDATA[chatgpt]]></category>
		<category><![CDATA[circle of fifths]]></category>
		<category><![CDATA[malinda]]></category>
		<category><![CDATA[music]]></category>
		<category><![CDATA[music theory]]></category>
		<category><![CDATA[perfect playlist]]></category>
		<category><![CDATA[playlist]]></category>
		<category><![CDATA[spotify]]></category>
		<category><![CDATA[spotlistr]]></category>
		<category><![CDATA[taylor swift]]></category>
		<category><![CDATA[YouTube]]></category>
		<guid isPermaLink="false">https://hilltopmonitor.jewell.edu/?p=19874</guid>

					<description><![CDATA[We all have that friend with perfect taste in music. They’ve seemingly got it all – their song recommendations are always fire. (Yes, I did&#8230; ]]></description>
										<content:encoded><![CDATA[
<p>We all have that friend with perfect taste in music. They’ve seemingly got it all – their song recommendations are always fire. (Yes, I did just unironically use the word fire in a Hilltop Monitor piece. My editors are going to hate me.) Playlists seem very easy to make – take a bunch of songs, throw them in your music player of choice, and voilà! Playlist.&nbsp;</p>



<p>But the art of creating a perfect playlist is a little bit more nuanced. In this article, I’ll walk you through the six steps of creating the perfect playlist.</p>



<p><strong>1. Pick a theme.<br></strong>The only thing differentiating a playlist from a randomly chosen list of songs is theming. Every playlist needs a theme, and there are many ways to pick one. Here are some ideas for themes:</p>



<p><strong>Genre:</strong> Genre is a crucial starting point. “Country” or “Pop” is valid as a playlist genre, but it’s too broad to be useful to you. It’s good as an idea generation mechanism, but you should combine it with something else by employing some of the other suggestions below.</p>



<p><strong>Feel: </strong>This one isn’t as objectively definable. In each song, the authors, composers and producers are trying to tell a story of some kind. For example, here’s the chorus to Malinda’s “It’s All True”:&nbsp;&nbsp;</p>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-4-3 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe title="It&#039;s All True" width="770" height="578" src="https://www.youtube.com/embed/mxVOFIWiYgs?start=58&#038;feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
</div></figure>



<p>In this song, Malinda’s singing about discovering their identity – they’re quite happy about the whole endeavor (as evinced by the words “I’m so in love with it all”). </p>



<p>Contrast this with the feel of Billie Eilish’s “my strange addiction”:</p>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-4-3 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe loading="lazy" title="my strange addiction" width="770" height="578" src="https://www.youtube.com/embed/k1ATPhkVWi0?start=20&#038;feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
</div></figure>



<p>Billie is clearly not singing about anything similar to Malinda. You’ll want to be careful when putting very different songs in the same list. Usually, the harmonics just aren’t there. But, sometimes, mood shifts can work well in playlists. Listen to one then the other. See if it works! (For musically savvy people, I explain harmonics more in step two.)</p>



<p><strong>Message:</strong> I like to make the songs in my playlists have similar messaging. That message could be about anything: breakups, falling in love, friends, growing up. If a message exists and isn’t off-the-wall crazy, there’s (probably) music about it.</p>



<p><strong>Artists:</strong> Songs can also represent authors or ideas. Your theme may involve making a playlist with songs authored by women or people of color. Bonus points if you represent a historically underrepresented group in your genre! An example of this type of playlist would be “female artists in country music.”</p>



<p><strong>2. Pick your songs.</strong><br>Once you’ve chosen your theme, the next step is to pick your songs. There are lots of ways to look for songs that fit your theme. Go find some, and leave your preferred way in the comments, so others can use it too! Some ways you could find songs are by searching for them, asking your friends for recommendations, checking the charts, looking for old songs you liked (you could use Spotify Wrapped or Apple Music’s Replay to do this) or by using the “auto-play” feature.</p>



<p>If you’re an advanced playlist junkie and are willing to put in the time to make an excellent playlist, you can use music theory to your advantage. If you’ve no interest in music theory, you can skip to step three.</p>



<p>Transitions are important in the realm of playlist curation. DJs and mixers alike use a method known as the Camelot system, which is a less music-theory-intensive way to visualize the well-known Circle of Fifths.</p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="446" height="446" src="https://hilltopmonitor.jewell.edu/wp-content/uploads/2024/02/unnamed-4.png" alt="" class="wp-image-19875" srcset="https://hilltopmonitor.jewell.edu/wp-content/uploads/2024/02/unnamed-4.png 446w, https://hilltopmonitor.jewell.edu/wp-content/uploads/2024/02/unnamed-4-300x300.png 300w" sizes="auto, (max-width: 446px) 100vw, 446px" /><figcaption class="wp-element-caption">Photo by <a href="http://www.harmonic-mixing.com/howto.aspx">Harmonic Mixing</a>.</figcaption></figure>



<p>A song can be in any of 24 keys – 12 major and 12 minor – one of each for every semitone of the Western chromatic scale. Songs in adjacent keys on the Circle of Fifths use many of the same notes and will often sound good together.</p>



<p>For smooth transitions, songs should stay in adjacent keys on the Circle of Fifths. For example, let’s say you have the Taylor Swift song “London Boy” on your playlist, and you want a good transition into a different song. “London Boy” is written in the key of C#/Db major – a Camelot number of 3B. (All major keys are labeled with “B” and all minor keys with “A.”)</p>



<p>So, “London Boy” could transition to any of the following keys:<br>&#8211; C#/Db major (staying in the key): Fitting with the pop theme, you could choose Jenna Raine’s “see you later” or Lauv’s “All 4 Nothing,” both of which are in C# major.<br>&#8211; G#/Ab major (up a fifth): Examples include MUNA’s “Silk Chiffon” or NF’s “Therapy Session.”<br>&#8211; F#/Gb major (down a fifth): If you still want Taylor, “I Can See You” is in this key.</p>



<p>These are just examples, and rules are always meant to be broken. Transitions don’t have to follow these rules if the result sounds good to you.</p>



<p><strong>3. Give your playlist a name.</strong><br>Now that you have a playlist and some songs, you must give it an aesthetic name. I can’t really help you here, since how you name playlists is up to you. I like names that alliterate, but you should choose a name you think works well!</p>



<p>If you’re stuck on this step, our wonderful “friend,” AI, can save you. Google’s <a href="https://gemini.google.com/">Gemini</a> and OpenAI’s <a href="https://chat.openai.com/auth/login">ChatGPT</a> can help give you inspiration. Searching for “playlist name generator” in your preferred search engine can also help.&nbsp;</p>



<p><strong>4. Give it a fun picture!<br></strong>No playlist is complete without a cover icon. Fortunately, generators can help you here, too.&nbsp;</p>



<p>If Spotify is your preferred music streaming platform, you can run your playlist through <a href="https://playlistart.byspotify.com/">its AI art generator</a> and get something from it. I don’t have Spotify, so I don’t know how well it works. Websites like <a href="https://www.spotlistr.com/create/cover">Spotlistr</a> can also turn any image (including ones from Unsplash, a copyright-free image library) and text of your choice into a playlist cover.</p>



<p>You’ll want to make sure that the icon fits the mood you’re going for.</p>



<p><strong>5. Fine-tune it.</strong><br>Now that you have a completed playlist, it’s time to give it a listen! You might find that it’s not very good the first time around. Even so, keep tweaking and tuning your music until you find something that works!</p>



<p><strong>6. Repeat!<br></strong>Keep doing this until you’ve made as many playlists as you think appropriate. Now you should be more qualified to bring the tunes to your next commute, social event or study session!</p>
]]></content:encoded>
					
					<wfw:commentRss>https://hilltopmonitor.jewell.edu/six-steps-to-the-perfect-playlist/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Faculty Feature: Dr. David Lisenby and the Magic of Language</title>
		<link>https://hilltopmonitor.jewell.edu/faculty-feature-dr-david-lisenby-and-the-magic-of-language/</link>
					<comments>https://hilltopmonitor.jewell.edu/faculty-feature-dr-david-lisenby-and-the-magic-of-language/#respond</comments>
		
		<dc:creator><![CDATA[Ethan Naber]]></dc:creator>
		<pubDate>Fri, 22 Sep 2023 10:27:02 +0000</pubDate>
				<category><![CDATA[Features]]></category>
		<category><![CDATA[Jewell Spotlights]]></category>
		<category><![CDATA[ai]]></category>
		<category><![CDATA[artificial intelligence]]></category>
		<category><![CDATA[chatgpt]]></category>
		<category><![CDATA[david lisenby]]></category>
		<category><![CDATA[dr. david lisenby]]></category>
		<category><![CDATA[dr. lisenby]]></category>
		<category><![CDATA[faculty feature]]></category>
		<category><![CDATA[feature]]></category>
		<category><![CDATA[features]]></category>
		<category><![CDATA[language department]]></category>
		<category><![CDATA[openai]]></category>
		<category><![CDATA[spanish department]]></category>
		<guid isPermaLink="false">https://hilltopmonitor.jewell.edu/?p=19387</guid>

					<description><![CDATA[The Hilltop Monitor had the opportunity to sit down with Dr. David Lisenby, associate professor of Spanish and director of the Honors Institute in Critical&#8230; ]]></description>
										<content:encoded><![CDATA[
<figure class="wp-block-image aligncenter size-large"><img loading="lazy" decoding="async" width="1024" height="683" src="https://hilltopmonitor.jewell.edu/wp-content/uploads/2023/09/freestocks-RgKmrxpIraY-unsplash-1-1024x683.jpg" alt="" class="wp-image-19398" srcset="https://hilltopmonitor.jewell.edu/wp-content/uploads/2023/09/freestocks-RgKmrxpIraY-unsplash-1-1024x683.jpg 1024w, https://hilltopmonitor.jewell.edu/wp-content/uploads/2023/09/freestocks-RgKmrxpIraY-unsplash-1-750x500.jpg 750w, https://hilltopmonitor.jewell.edu/wp-content/uploads/2023/09/freestocks-RgKmrxpIraY-unsplash-1-768x512.jpg 768w, https://hilltopmonitor.jewell.edu/wp-content/uploads/2023/09/freestocks-RgKmrxpIraY-unsplash-1-1536x1024.jpg 1536w, https://hilltopmonitor.jewell.edu/wp-content/uploads/2023/09/freestocks-RgKmrxpIraY-unsplash-1-2048x1365.jpg 2048w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /><figcaption class="wp-element-caption"><em>(<a href="https://unsplash.com/@freestocks">freestocks</a>/<a href="https://unsplash.com/photos/RgKmrxpIraY">Unsplash</a></em>)</figcaption></figure>



<p>The Hilltop Monitor had the opportunity to sit down with Dr. David Lisenby, associate professor of Spanish and director of the Honors Institute in Critical Thinking, to discuss all things Spanish – from the importance of learning a language to reading and analyzing literature to the rise of artificial intelligence (AI) in writing and in translation work.</p>



<p>Lisenby teaches courses at all levels of Spanish, but his favorite is SPA 315: Textual Analysis and Composition. For Lisenby, the course marks a shift in Spanish pedagogy: the first four courses in the Spanish sequence focus on grammar and vocabulary. In 315, though, students who are Spanish majors and minors level up and focus on &#8220;[reading and studying] literature and social issues in Spanish that are not designed for English language students who are learning Spanish,&#8221; Lisenby explained. While his specialty is in Latin American literature and translation, he said he enjoys SPA 315 because it empowers students to &#8220;[talk] about social issues and&#8230; [get] better at expressing themselves [in Spanish].&#8221;</p>



<p>Lisenby was on sabbatical last semester, receiving a grant from the National Endowment for the Arts. He used that sabbatical to translate Abilio Estévez’s “How I Met the Sower of Trees,” a collection of short stories narrated <a href="https://www.arts.gov/impact/literary-arts/translation-fellows/david-lisenby">“from spaces of queer desire separated from home and homeland.”</a></p>



<p>Over the course of our conversation, Lisenby brought up the rise of new AI translation tools, like OpenAI&#8217;s ChatGPT (built on the GPT-3 family of models), which is now on par with Google Translate when it comes to translation accuracy. Lisenby rejected the idea that machine translation software could ever be close to perfect: while ChatGPT is decent at translating ideas, it can&#8217;t capture the emotional hook of literature, so it&#8217;s still a long way off, he explained.</p>



<p>This rise in AI doesn’t remove the human need to learn &#8211; or translate &#8211; languages, though. The impacts of learning language, noted Lisenby, come in our experiences with other people: “[No technological intervention] can take the place of human-to-human contact, and even learning a little bit of another language makes it possible to have human-to-human contact with someone who doesn’t speak English, and I find that magical.”</p>



<p>To people who find learning a language daunting, Lisenby is empathetic: “There is no shortcut to learning a new language brilliantly and easily.” It’s not easy to learn a new language, and it can seem impossible at times, but Lisenby is confident that anyone can do it with help. He suggests finding conversation partners to maximize language input and output, further emphasizing the human aspect of learning a language.</p>



<p>As AI gets better and better, students may be tempted to let it do the hard work of language and translation for them. With the rise of ChatGPT and other machine learning tools, many fields are having to adapt. Will we bow down to the omnipotent AI overlords? Maybe. Machine learning may get better at writing films or stories, or at solving math problems, or whatever else we throw at it. However, as Lisenby noted: “There will always be a place for human-to-human interaction,” and learning a new language is a great way to find that interaction.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://hilltopmonitor.jewell.edu/faculty-feature-dr-david-lisenby-and-the-magic-of-language/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
