Is ChatGPT the end of trust? Will the college essay survive?


Hello and welcome to December’s special edition of Eye on A.I.

Is ChatGPT the end of trust? That’s what some people are suggesting after the release of OpenAI’s chatbot ChatGPT, which is shockingly good at mimicking human writing in almost any format, from computer code and poetry to blog posts and polemical essays. Much of what the chatbot spits out is factually accurate. But much of it isn’t. And the problem is that there is no easy way for a user to ensure that ChatGPT’s responses are accurate. ChatGPT expresses both fact and fiction with equal confidence and style.

Never mind that the written word has had trust issues since the very beginning of writing. (Ancient scribes were often propagandists and fabulists, after all.) There does seem to be something different about the way ChatGPT can create fluent and confident answers to almost any question in less than a second—and, since OpenAI isn’t charging for it right now, at zero cost to the user. Before, creating a convincing fraud took time and serious effort. But tools like ChatGPT mean that the marginal cost of creating misinformation has essentially dropped to zero. That means we are likely to see an explosion of it.

Some say we have already seen the first victim of this misinformation eruption: Stack Overflow, a site that provides community-sourced answers to people’s computer coding questions, had to bar users from submitting answers created by ChatGPT after being overwhelmed with responses created by the chatbot. The problem, Stack Overflow said, is that the answers seemed very convincing, but were actually wrong, and it was taking their community moderators too long to vet all the answers and discover the flaws.

Things are going to get a lot worse if one of the new advocates for open-sourcing A.I. models decides to build a ChatGPT clone and make it fully available for free as an open-source project. (Right now OpenAI still controls the model behind ChatGPT, and users can only query the model through an interface that OpenAI could shut down, or impose charges to use, at any time. Its terms of use also bar people from using the bot to run misinformation campaigns.) Already Emad Mostaque, the former hedge fund manager who runs Stability AI, the company that helped train and open-source the popular text-to-image system Stable Diffusion, has asked his Twitter followers whether Stability should create an open-source version of ChatGPT.

As part of its release of ChatGPT, OpenAI also released an A.I. system that can detect whether text was created using ChatGPT. The open-source A.I. startup Hugging Face hosts an interface to that ChatGPT detector on its website. Casey Fiesler, a professor of information science at the University of Colorado at Boulder, said on Twitter that in her experiments, when she fed the detector five student-written essays and five created using ChatGPT, it flagged all five ChatGPT-made ones with 99.9% confidence. But some researchers doubt the detector will work on all future versions of the A.I. system, or on similar, but not identical, large language models that others train. Earlier research on large language models had found that A.I. systems were poor at differentiating between A.I.-created and human-written text.
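For readers curious what querying a detector of this kind looks like in practice, here is a minimal sketch using Hugging Face’s transformers library and the publicly hosted RoBERTa-based output detector. The model name, the “Real”/“Fake” labels, and the sample text are assumptions made for illustration; this is not necessarily the exact setup Fiesler used, and such classifiers are known to produce false positives.

```python
# Minimal sketch (assumptions noted above): score a passage with the
# RoBERTa-based machine-text detector hosted on Hugging Face.
from transformers import pipeline

# "roberta-base-openai-detector" is the publicly hosted checkpoint; its output
# labels are assumed to be "Real" (human-written) and "Fake" (machine-generated).
detector = pipeline("text-classification", model="roberta-base-openai-detector")

essay = (
    "The printing press transformed European society by making books cheap "
    "and widely available, which accelerated the spread of new ideas."
)

# The model only looks at a limited window of text, so long essays would need
# to be truncated or scored in chunks.
print(detector(essay))
# Illustrative output: [{'label': 'Real', 'score': 0.97}] -- a high 'Fake' score
# would suggest machine-generated text, but these scores are far from proof.
```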

One area where many people think ChatGPT and similar systems will have an immediate and profound effect is education. Many say such systems mean the end of using any kind of write-at-home essay or report for student assessments. It might mean the end of college application essays and term papers. The Atlantic had a very good piece examining this last week. I asked a friend of mine who is a university professor what he thought, and he answered unequivocally that the term paper was finished. He thought professors would have to rely solely on proctored exams in which students hand-write their essays (or type on computers that they can prove are not connected to the internet).

Kevin Scott, Microsoft’s chief technology officer, said at Fortune’s Brainstorm A.I. conference in San Francisco last week that teachers who were wringing their hands about ChatGPT were making “a pedagogical mistake” in confusing the essay, which he said was simply “an artifact,” with the important information that the teacher is actually trying to ensure that the student has learned. He seemed to say that ChatGPT would no more destroy the teaching of humanities than the calculator had destroyed the teaching of mathematics. “In a sense, nothing really is changing here other than you have this tool, and the student themselves has to become the teacher to the model,” he said, meaning that the student will still have to go over the answer that the large language model produces and ensure that it is not making up information. The student, for now, would still have to provide accurate citations of where the information was coming from. “Your job is: be the editor for this thing, be the teacher, coax it into getting you the output that you really need. That’s the important thing, that’s the special thing about us. The thing is just a tool.”

Scott is not alone in the view that ChatGPT could actually be great for education. But I think Scott and others are missing something here. Teachers use essays for more than just assessing what facts a student has learned. Checking facts may be the main purpose in elementary school, but at the high school and, certainly, the university level, teachers use essays not simply to see what facts students know but to see whether they can use those facts to make a well-reasoned argument. The facts are simply supporting evidence: necessary, but not sufficient, for top marks. Teachers also use essays to assess how well a student can express ideas in writing—how graceful the prose is, whether the student can come up with original and apt metaphors, and so on.

Perhaps most importantly, it is difficult to separate the act of composition from the act of thinking—by writing a person is forced to structure their thoughts, refine their ideas, marshal evidence, and consider counter-arguments. There’s a lot of learning that takes place in the act of composition itself. Most of that disappears when ChatGPT or its successor bots can simply churn out page after page of well-written and well-structured prose and the student is reduced to being a mere fact-checker and annotator. We write not merely to convey information, but to conjure and refine it.

Read on for a few more A.I.-related stories from the past week. Fortune’s newsletters, including Eye on A.I., are going on hiatus for the holidays. The next Eye on A.I. will be in your inboxes on Jan. 10. In the meantime, happy holidays and a happy new year to you all! See you in 2023.

Jeremy Kahn [email protected] @jeremyakahn

A.I. IN THE NEWS

Illustrators bemoan the ease with which A.I. tools like ChatGPT and Midjourney are allowing anyone to create children’s books. Time magazine chronicles what happened after Ammaar Reshi created a children’s book without having to do any of the writing or illustrating himself and began selling the self-published book on Amazon. But artists protested that such technology was profiting off their own work since A.I. systems like GPT and Midjourney are trained on vast databases of existing human-created images and text. “The main problem to me about A.I. is that it was trained off of artists’ work,” Adriane Tsai, a children’s book illustrator, told Time. “It’s our creations, our distinct styles that we created, that we did not consent to being used.”

DeepMind’s code-writing A.I. AlphaCode called a ‘stunning’ advance. That’s according to Science magazine, which published a peer-reviewed version of DeepMind’s research on a coding bot that could compete successfully against human coders. (AlphaCode had initially been announced back in February.) AlphaCode can solve 34% of assigned coding problems, a performance that far exceeds a competing system called Codex that OpenAI debuted in 2021. In online coding competitions with at least 5,000 competitors, the system outperformed 45.7% of human programmers.

Alphabet employees worry the company is falling behind in the race to commercialize advanced A.I. technology. That’s according to a report in CNBC that said at a recent all-hands company meeting, employees questioned the company’s decision not to release its own powerful chatbot A.I., which is called LaMDA, more widely in light of the surge in popularity around OpenAI’s ChatGPT. Right now, LaMDA is only available to researchers inside Google and a handpicked group of others, with very limited public access through Google’s A.I. Test Kitchen. According to CNBC, Alphabet CEO Sundar Pichai and Jeff Dean, the long-time head of Google’s A.I. division, responded to the question by saying that the company has similar capabilities to OpenAI’s ChatGPT but that the cost if something goes wrong would be greater because people have to trust the answers they get from Google.


POV: With ChatGPT’s Arrival, Should Educators Be Mourning the End of the College Essay?


“With these advances in AI technology, we have an opportunity to think deeply about our ‘why’ and reevaluate what we can and should be teaching in a changing world”

Naomi Caselli, Christina Dobbs, Derry Wijaya

In the weeks since the release of ChatGPT, a new AI technology that can write convincingly humanlike passages of text, we have seen a flood of worry among educators that students will use AI to write their term papers and that “ChatGPT will make cheating easier than ever.” While this technology will certainly impact how we use writing assignments, plagiarism isn’t high on our list of concerns. Rather than focusing on work-arounds—so we can use the same assignments we have always used—we think we need to have a much bigger conversation. In the midst of a changing world, we need to ask ourselves what we teach students and why, and sort out which learning objectives we should retain, which will become obsolete, and which we should add to the curriculum.

Automated text generation has the potential to be as transformative as the printing press. Trying to prevent students from using this technology, as the New York City education department has attempted, seems as impossible and unnecessary as trying to force people to travel by horse and buggy after cars were invented. Moreover, trying to maintain the status quo might actually do a disservice to our students. This technology will change many of the professions where our students will ultimately work. They’ll have to navigate a world that makes use of language generation technology once they graduate, and we should let them explore how to use these tools appropriately in spaces designed for learning and exploration.

Instead of trying to preserve writing assignments just because it is how we’ve always done things, let’s look at the purposes these assignments serve and consider how best to achieve these functions. Through writing, students learn to organize their thoughts, draw together evidence, synthesize complex ideas, develop compelling arguments, and much more. While AI can help with some of the mechanics of writing, students will still need to hone many of these skills. In one course this semester, we spent several class meetings on how to prepare and format a research manuscript. If formatting a paper is not a skill students will need in the future, perhaps that time could have been better spent working on these deeper intellectual skills. Writing assignments are also a way for instructors to evaluate what students have learned, and what more we need to teach them. In some cases, we imagine writing assignments are still the best means of serving these functions, but in other cases, other kinds of assignments might be more useful.

Just as society had to build literal guardrails to make driving cars safe, we are going to need to figure out how to prepare students to handle risks inherent to the new technology. With the ability to generate text instantly, we will have to redouble our efforts to teach students how to wade through the coming influx of text to identify misinformation. ChatGPT can make spectacular mistakes that students will need to learn to spot. In an essay where we asked it to write about American Sign Language, it wrote, “A sign for ‘I’m scared’ might be made by pressing a flat hand against the forehead.” This gesture—::facepalm::—is not correct. Students will need to know when AI works well, and when it doesn’t (e.g., in smaller fields, like deaf education, the technology doesn’t have much text to learn from and so it doesn’t work very well). We will need to teach (and create) conventions for properly attributing sources so readers can differentiate between AI- and human-written text. AI is trained to write based on human writing, and so it has learned to reproduce our ableism, racism, sexism, and other biases, especially in response to biased prompts. We will need to teach students how to root out and respond to toxic, biased, and harmful text, no matter how it is produced. Students will also need to learn strategies for getting useful output, and employing it in constructive ways. For fun, we tried to prompt ChatGPT to write this essay, and while the essays were not very creative (they shared much of the same content, structure, and words), the prompt makes a big difference in essay quality. These are just some of the foreseeable skills students will need to learn.

This is hardly the first time educators have had to grapple with how and when to use a new technology, or the first time people have feared that new technologies would destroy students’ thinking. Even Socrates worried that teaching students to write at all would lead to a populace with weak memories. Before calculators and computers, statistics was taught by paper and pencil. Now most statistics courses not only allow students to use computers but explicitly teach students how to use the software. Certainly, there are times when it is pedagogically important to ask students to put their calculators away, but we have also made space for technology in our instruction. By delegating the tedious parts of the calculations to a computer, statistics courses can spend more time on “why” and delve into much more sophisticated techniques that would be impractical to carry out by hand. With these advances in AI technology, we have an opportunity now to think deeply about our “why” and reevaluate what we can and should be teaching in a changing world. In the end, making our assignments more meaningful to students may actually be a very effective way of dealing with plagiarism, as students are less likely to cheat when they are engaged in work they believe will help them become the future selves they envision.

Note: This essay was generated by human beings, with the help of spell-check. 

Naomi Caselli (Wheelock’09, GRS’10) is a Wheelock College of Education & Human Development assistant professor of deaf studies and deaf education and codirector of BU’s AI and Education Initiative. She can be reached at [email protected]. Christina Dobbs is a Wheelock assistant professor and codirector of the English Education for Equity & Justice program. She can be reached at [email protected]. Derry Wijaya is a College of Arts & Sciences professor of computer science and codirector of the AI and Education Initiative. She can be reached at [email protected].

“POV” is an opinion page that provides timely commentaries from students, faculty, and staff on a variety of issues: on-campus, local, state, national, or international. Anyone interested in submitting a piece, which should be about 700 words long, should contact John O’Rourke at [email protected] . BU Today reserves the right to reject or edit submissions. The views expressed are solely those of the author and are not intended to represent the views of Boston University.

Explore Related Topics:

  • Share this story
  • 9 Comments Add

Naomi Caselli Profile

Christina Dobbs Profile

Derry Wijaya Profile

Comments & Discussion


There are 9 comments on POV: With ChatGPT’s Arrival, Should Educators Be Mourning the End of the College Essay?

Absolutely great article! Thanks for tackling this topic.

I still remember grade school teachers warning us against cheating: “The only person you are cheating is yourself, because you didn’t learn the material.”

If the student’s only objective is to complete a class, then AI can probably help with that. However, there is still no substitute for genuine learning.

Remember what the A stands for in AI!

I couldn’t agree more that education reform is a necessary step forward rather than a push back against new technology. It’s important to focus on improving the education system and incorporating technology in a way that enhances the learning experience, rather than being resistant to change. This is the key to ensuring that students are well-equipped for the future.

I really appreciated the unique perspective the writer brought to the piece. It was not at all what I had expected and it made for a refreshing read. Great job!

-this message was written with ChatGPT

I don’t believe that AI will be used to cheat in essays for college. While AI technology is rapidly advancing, it still lacks the ability to fully replicate human thought and creativity. Additionally, measures such as plagiarism detection software and the expertise of professors and teachers will likely be able to detect any attempts to cheat via AI.

-This message was written by ChatGPT

I don’t really agree with this. One of the biggest concerns I have is that students might use AI to cheat on assignments, and the article doesn’t really address that. Instead, it talks about how we should embrace this technology in education. But in my opinion, we should be focusing on teaching students how to do their own research and how to avoid plagiarism.

Another thing that worries me is that if we rely too much on AI-generated text, students might stop thinking for themselves and just take whatever the AI tells them. It is important that we still teach students how to think critically and how to write well.

Also, the article doesn’t mention anything about how AI-generated text can be biased. It’s important to remember that the AI is only as unbiased as the data it was trained on. So, it’s crucial to be aware of this when using AI-generated text in any context.

I think that while AI-generated text could be useful in some cases, we should be careful about how we use it in education. We should focus on teaching students how to use it responsibly and how to identify and avoid biases and misinformation.

–message written by ChatGPT

Excellent article. I completely agree that we need to be having these conversations and not get too obsessed with the “plagiarism” aspect. But I would add that we also need to be looking at the proliferation of AI-powered writing tools that integrate tightly into Google Docs and Word, and that encourage what we might call “micro-generations.” There’s a huge gray area here, and we need to help students understand how to use them appropriately and how much help is too much help.

Thank you for the insightful and concise article. You captured it as part of a continuum of challenges educators have had to face and will continue to face in the future. It is a good test for our academic society’s adaptability to waves of change. I envision that these events will determine the projection of an academic environment. I wonder if we may have to rethink “cheating,” which I prefer to think of as misappropriation (misusing resources in our care), requiring a conducive trio of incentive, opportunity, and rationalization. Now that there is a tectonic shift in the landscape, how will institutions and society re-establish a new norm? I would enjoy reading follow-up perspectives as the discussion continues.

This is a great take on ChatGPT, as it is really controversial in classrooms. However, I think it is also important to note that education systems can use it to their benefit. I did a project on this previously and there were many ways to implement ChatGPT in classrooms, like as an essay editor or a source of personalized essay prompts. While ChatGPT can’t be prevented from writing these essays, we definitely can alter the way essays are assigned. AI is very powerful, but lacks genuine emotion. We can use this to our advantage, preventing cheating and plagiarism by assigning work that requires creativity, emotion, and personal experience. Additionally, many writing assignments rely on other readings, like books and articles. Doing this encourages original thinking rather than using AI to write these essays.

As a student who has made use of ChatGPT for various tasks, writing essays is not one of them. The voice of an AI pales in comparison to the amount of imagery, emotion, and feeling that you can derive from reading human-generated work. Therefore, while these tools may pose a threat to certain curriculums, I feel as though teachers will be able to know whether or not such a resource was used in the first place, especially following the implementation of AI detection software for submission platforms like Turnitin. I think that the goal for educators should not be to prevent the use of ChatGPT, because it is inevitable. Rather, it should be to implement it into the curriculum in a creative way or to construct essay prompts that can only be poorly answered by ChatGPT.


A.I. Could Be Great for College Essays

Every year, the artificial intelligence company OpenAI improves its text-writing bot, GPT. And every year, the internet responds with shrieks of woe about the impending end of human-penned prose. This cycle repeated last week when OpenAI launched ChatGPT—a version of GPT that can seemingly spit out any text, from a Mozart-styled piano piece to the history of London in the style of Dr. Seuss. The response on Twitter was unanimous: The college essay is doomed. Why slave over a paper when ChatGPT can write an original for you?

Chatting with ChatGPT is fun. (Go play with it!) But the college essay isn’t doomed, and A.I. like ChatGPT won’t replace flesh-and-blood writers. They may make writing easier, though.

GPT-3, released by OpenAI in 2020, is the third and best-known version of OpenAI’s Generative Pre-trained Transformer—a computer program known as a large language model. Large language models produce language in response to language—typically, text-based prompts (“Write me a sonnet about love”). Unlike traditional computer programs that execute a series of hard-coded commands, language models are trained by sifting through large datasets of text like Wikipedia. Through this training, they learn patterns in language that are then used to generate the most likely completions to questions or commands.

Language is rife with repetition. Our ability to recognize and remember regularities in speech and text allows us to do things like complete a friend’s sentence or solve a Wordle in three tries. If I asked you to finish the sentence “The ball rolled down the …” you’d say “hill,” and so would GPT-3. Large language models are, like people, great at learning regularities in language, and they use this trick to generate human-like text. But when tested on their ability to understand the language they produce, they often look more like parrots than poets.
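To make that idea of “most likely completion” concrete, here is a rough sketch using the small, freely downloadable GPT-2 model from Hugging Face’s transformers library rather than GPT-3 (which is only reachable through OpenAI’s paid API), so the exact rankings are illustrative rather than a claim about what GPT-3 would return.

```python
# Illustrative sketch: ask a small language model (GPT-2) which words are most
# likely to follow "The ball rolled down the".
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

inputs = tokenizer("The ball rolled down the", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # one score per vocabulary word, per position

next_token_scores = logits[0, -1]           # scores for the very next word
top5 = torch.topk(next_token_scores, k=5)   # five most probable continuations
print(tokenizer.convert_ids_to_tokens(top5.indices.tolist()))
# Completions like " hill" and " stairs" tend to rank near the top: the model has
# simply learned, from its training text, which words usually follow this phrase.
```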

Gary Marcus, an emeritus professor at NYU and the author of Rebooting AI, is a vocal critic of the idea that bots like GPT-3 understand what they’re writing. In a recent preprint (an academic paper that hasn’t been peer-reviewed), Marcus and two co-authors used OpenAI’s DALL-E 2 to demonstrate the limitations of large language models. DALL-E 2, like GPT-3, responds to text-based prompts. But instead of producing language, it creates images. DALL-E 2 was trained to learn associations between images and their captions to produce entirely new images based on the text users enter. If you ask it for, say, “an impressionist painting of Albert Einstein surfing,” you get four images of exactly that.

Not bad, right? DALL-E 2 was recently used to win an art competition.

But as Marcus and his co-authors point out in their paper, DALL-E 2 doesn’t understand the relationship between the words that users enter. If, for instance, you ask it to produce an image of “an old man talking to his parents,” you get four images of three people talking. The AI fails to understand that, however old the old man looks, the people he’s talking to must, by definition of being his parents, look older.

Compared with DALL-E 2, ChatGPT appears to have a better grasp of the text users enter and the responses it spits out. This might be why the internet responded to its release so enthusiastically. ChatGPT “is causing jaws to drop and brows to furrow,” wrote Axios.

I had trouble stumping it with some classic riddles from cognitive psychology. Take the Lily Pad problem:

Prompt: A lily pad, in a pond, doubles in size every day. On the 20th day, it covers the entire pond. On what day did it cover half the pond?

ChatGPT: A lily pad in a pond that doubles in size every day will cover half the pond on the 19th day, one day before it covers the entire pond.

The A.I. got that the question described exponential growth—a phenomenon that, as the pandemic demonstrated, many humans fail to understand. But I did fool ChatGPT with the bat-and-ball problem made famous by Princeton psychologist Daniel Kahneman’s book, Thinking, Fast and Slow:

Prompt: A bat and ball cost $1.10. The bat costs $1 more than the ball. How much does the ball cost?

ChatGPT: If the bat costs $1 more than the ball, and the bat and ball cost $1.10 in total, then the ball must cost $1.10 - $1.00 = $0.10.

The correct answer is 5 cents. (Trust me.)
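For readers who would rather check than trust, the algebra behind that answer is short (this worked example is added here for completeness and is not part of the original exchange with ChatGPT):

```latex
% Let b be the price of the ball in dollars; the bat then costs b + 1.
\begin{aligned}
  b + (b + 1) &= 1.10 \\
  2b &= 0.10 \\
  b &= 0.05
\end{aligned}
% The ball costs $0.05 and the bat $1.05, which is exactly $1 more than the
% ball -- not the intuitive (and ChatGPT's) answer of $0.10.
```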

ChatGPT might fail at tests of reasoning, but it’s great at generating humanlike text to different lengths and in various styles. This brings us back to the college essay—an easy task for ChatGPT, many commentators claimed on Twitter. “We’re witnessing the death of the college essay in realtime,” tweeted one Google employee. Ethan Mollick, a management professor at the University of Pennsylvania, had ChatGPT write an essay question, create a grading rubric for said question, answer the question, and grade its own answer. (It got an A minus.) How could the essay not be doomed?

This isn’t the first time that large language models have been predicted to fell the essay or worse. “To spend ten minutes with Sudowrite [a GPT-3-based A.I.] is to recognize that the undergraduate essay, the basic pedagogical mode of all humanities, will soon be under severe pressure,” wrote journalist Stephen Marche in a 2021 New Yorker piece. (On Tuesday, Marche wrote an article for the Atlantic titled “The College Essay Is Dead.”) And in 2019, when GPT-2 was created, OpenAI withheld it from the public because the “fear of malicious applications” was too high.

If any group were to put an A.I. to malicious use, essay-burdened undergraduates would surely be the first. But the evidence that A.I. is being used to complete university assignments is hard to find. (When I asked my class of 47 students recently about using A.I. for schoolwork, they looked at me like I was mad.) It could be a matter of time and access before A.I. is used more widely by students to cheat; ChatGPT is the first free text-writing bot from OpenAI (although it won’t be free forever). But it could also be that large language models are just not very good at answering the types of questions professors ask.

If you ask ChatGPT to write an essay contrasting socialism and capitalism, it produces what you expect: 28 grammatical sentences covering wealth distribution, poverty reduction, and employment stability under these two economic systems. But few professors ask students to write papers on broad questions like this. Broad questions lead to a rainbow of responses that are impossible to grade objectively. And the more you make the question like something a student might get—narrow, and focused on specific, course-related content—the worse ChatGPT performs.

I gave ChatGPT a question about the relationship between language and colour perception that I ask my third-year psychology of language class, and it bombed. Not only did its response lack detail, but it attributed a paper I instructed it to describe to an entirely different study. Several more questions produced the same vague and error-riddled results. If one of my students handed in the text ChatGPT generated, they’d get an F.

Large language models generate the most likely responses based on the text they are fed during training, and, for now, that text doesn’t include the reading lists of thousands of college classes. They also prevaricate. The model’s calculation of the most probable text completion is not always the most correct response—or even a true response. When I asked Gary Marcus about the prospect of ChatGPT writing college essays his answer was blunt: “It’s basically a bullshit artist. And bullshitters rarely get As—they get Cs or worse.”

If these problems are fixed—and, based on how these models work, it’s unclear that they can be—I doubt A.I. like ChatGPT will produce good papers. Even humans who write papers for money struggle to do it well. In 2014, a department of the U.K. government published a study of history and English papers produced by online-essay writing services for senior high school students. Most of the papers received a grade of C or lower. Much like the work of ChatGPT, the papers were vague and error-filled. It’s hard to write a good essay when you lack detailed, course-specific knowledge of the content that led to the essay question.

ChatGPT may fail at writing a passable paper, but it’s a useful pedagogical tool that could help students write papers themselves. Ben Thompson, who runs the technology blog and newsletter Stratechery, wrote about this change in a post about ChatGPT and history homework. Thompson asked ChatGPT to complete his daughter’s assignment on the English philosopher Thomas Hobbes; the A.I. produced three error-riddled paragraphs. But, as Thompson points out, failures like this don’t mean that we should trash the tech. In the future, A.I. like ChatGPT can be used in the classroom to generate text that students then fact-check and edit. That is, these bots solve the problem of the blank page by providing a starting point for papers. I couldn’t agree more.

I frequently used ChatGPT while working on this piece. I asked for definitions that, after a fact-check, I included. At times, I threw entire paragraphs from this piece into ChatGPT to see if it produced prettier prose. Sometimes it did, and then I used that text. Why not? Like spell check, a thesaurus, and Wikipedia, ChatGPT made the task of writing a little easier. I hope my students use it.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.


MIT Technology Review


ChatGPT is going to change education, not destroy it

The narrative around cheating students doesn’t tell the whole story. Meet the teachers who think generative AI could actually make learning better.

By Will Douglas Heaven


The response from schools and universities was swift and decisive.

Just days after OpenAI dropped ChatGPT in late November 2022, the chatbot was widely denounced as a free essay-writing, test-taking tool that made it laughably easy to cheat on assignments.

Los Angeles Unified, the second-largest school district in the US, immediately blocked access to OpenAI’s website from its schools’ network. Others soon joined. By January, school districts across the English-speaking world had started banning the software, from Washington, New York, Alabama, and Virginia in the United States to Queensland and New South Wales in Australia.

Several leading universities in the UK, including Imperial College London and the University of Cambridge, issued statements that warned students against using ChatGPT to cheat. 

“While the tool may be able to provide quick and easy answers to questions, it does not build critical-thinking and problem-solving skills, which are essential for academic and lifelong success,” Jenna Lyle, a spokeswoman for the New York City Department of Education, told the Washington Post in early January.

This initial panic from the education sector was understandable. ChatGPT, available to the public via a web app, can answer questions and generate slick, well-structured blocks of text several thousand words long on almost any topic it is asked about, from string theory to Shakespeare. Each essay it produces is unique, even when it is given the same prompt again, and its authorship is (practically) impossible to spot. It looked as if ChatGPT would undermine the way we test what students have learned, a cornerstone of education.

But three months on, the outlook is a lot less bleak. I spoke to a number of teachers and other educators who are now reevaluating what chatbots like ChatGPT mean for how we teach our kids. Far from being just a dream machine for cheaters, many teachers now believe, ChatGPT could actually help make education better.

Advanced chatbots could be used as powerful classroom aids that make lessons more interactive, teach students media literacy, generate personalized lesson plans, save teachers time on admin, and more.

Educational-tech companies including Duolingo and Quizlet, which makes digital flash cards and practice assessments used by half of all high school students in the US, have already integrated OpenAI’s chatbot into their apps. And OpenAI has worked with educators to put together a fact sheet about ChatGPT’s potential impact in schools. The company says it also consulted educators when it developed a free tool to spot text written by a chatbot (though its accuracy is limited). 

“We believe that educational policy experts should decide what works best for their districts and schools when it comes to the use of new technology,” says Niko Felix, a spokesperson for OpenAI. “We are engaging with educators across the country to inform them of ChatGPT’s capabilities. This is an important conversation to have so that they are aware of the potential benefits and misuse of AI, and so they understand how they might apply it to their classrooms.”

But it will take time and resources for educators to innovate in this way. Many are too overworked, under-resourced, and beholden to strict performance metrics to take advantage of any opportunities that chatbots may present. 

It is far too soon to say what the lasting impact of ChatGPT will be—it hasn’t even been around for a full semester. What’s certain is that essay-writing chatbots are here to stay. And they will only get better at standing in for a student on deadline—more accurate and harder to detect. Banning them is futile, possibly even counterproductive. “We need to be asking what we need to do to prepare young people—learners—for a future world that’s not that far in the future,” says Richard Culatta, CEO of the International Society for Technology in Education (ISTE), a nonprofit that advocates for the use of technology in teaching.

Tech’s ability to revolutionize schools has been overhyped in the past, and it’s easy to get caught up in the excitement around ChatGPT’s transformative potential. But this feels bigger: AI will be in the classroom one way or another. It’s vital that we get it right. 

From ABC to GPT

Much of the early hype around ChatGPT was based on how good it is at test taking. In fact, this was a key point OpenAI touted when it rolled out GPT-4 , the latest version of the large language model that powers the chatbot, in March. It could pass the bar exam! It scored a 1410 on the SAT! It aced the AP tests for biology, art history, environmental science, macroeconomics, psychology, US history, and more. Whew!

It’s little wonder that some school districts totally freaked out.

Yet in hindsight, the immediate calls to ban ChatGPT in schools were a dumb reaction to some very smart software. “People panicked,” says Jessica Stansbury, director of teaching and learning excellence at the University of Baltimore. “We had the wrong conversations instead of thinking, ‘Okay, it’s here. How can we use it?’”

“It was a storm in a teacup,” says David Smith, a professor of bioscience education at Sheffield Hallam University in the UK. Far from using the chatbot to cheat, Smith says, many of his students hadn’t yet heard of the technology until he mentioned it to them: “When I started asking my students about it, they were like, ‘Sorry, what?’”

Even so, teachers are right to see the technology as a game changer. Large language models like OpenAI’s ChatGPT and its successor GPT-4, as well as Google’s Bard and Microsoft’s Bing Chat, are set to have a massive impact on the world. The technology is already being rolled out into consumer and business software. If nothing else, many teachers now recognize that they have an obligation to teach their students about how this new technology works and what it can make possible. “They don’t want it to be vilified,” says Smith. “They want to be taught how to use it.”

Change can be hard. “There’s still some fear,” says Stansbury. “But we do our students a disservice if we get stuck on that fear.”

Stansbury has helped organize workshops at her university to allow faculty and other teaching staff to share their experiences and voice their concerns. She says that some of her colleagues turned up worried about cheating, others about losing their jobs. But talking it out helped. “I think some of the fear that faculty had was because of the media,” she says. “It’s not because of the students.”

In fact, a US survey of 1,002 K–12 teachers and 1,000 students between 12 and 17, commissioned by the Walton Family Foundation in February, found that more than half the teachers had used ChatGPT—10% of them reported using it every day—but only a third of the students. Nearly all those who had used it (88% of teachers and 79% of students) said it had a positive impact.

A majority of teachers and students surveyed also agreed with this statement: “ChatGPT is just another example of why we can’t keep doing things the old way for schools in the modern world.”

Helen Crompton, an associate professor of instructional technology at Old Dominion University in Norfolk, Virginia, hopes that chatbots like ChatGPT will make school better.

Many educators think that schools are stuck in a groove, says Crompton, who was a K–12 teacher for 16 years before becoming a researcher. In a system with too much focus on grading and not enough on learning, ChatGPT is forcing a debate that is overdue. “We’ve long wanted to transform education,” she says. “We’ve been talking about it for years.”

Take cheating. In Crompton’s view, if ChatGPT makes it easy to cheat on an assignment, teachers should throw out the assignment rather than ban the chatbot.

We need to change how we assess learning, says Culatta: “Did ChatGPT kill assessments? They were probably already dead, and they’ve been in zombie mode for a long time. What ChatGPT did was call us out on that.”

Critical thinking

Emily Donahoe, a writing tutor and educational developer at the University of Mississippi, has noticed classroom discussions starting to change in the months since ChatGPT’s release. Although she first started to talk to her undergraduate students about the technology out of a sense of duty, she now thinks that ChatGPT could help teachers shift away from an excessive focus on final results. Getting a class to engage with AI and think critically about what it generates could make teaching feel more human, she says, “rather than asking students to write and perform like robots.”

This idea isn’t new. Generations of teachers have subscribed to a framework known as Bloom’s taxonomy, introduced by the educational psychologist Benjamin Bloom in the 1950s, in which basic knowledge of facts is just the bedrock on which other forms of learning, such as analysis and evaluation, sit. Teachers like Donahoe and Crompton think that chatbots could help teach those other skills. 

In the past, Donahoe would set her students to writing assignments in which they had to make an argument for something—and grade them on the text they turned in. This semester, she asked her students to use ChatGPT to generate an argument and then had them annotate it according to how effective they thought the argument was for a specific audience. Then they turned in a rewrite based on their criticism.

Breaking down the assignment in this way also helps students focus on specific skills without getting sidetracked. Donahoe found, for example, that using ChatGPT to generate a first draft helped some students stop worrying about the blank page and instead focus on the critical phase of the assignment. “It can help you move beyond particular pain points when those pain points aren’t necessarily part of the learning goals of the assignment,” she says.

Smith, the bioscience professor, is also experimenting with ChatGPT assignments. The hand-wringing around it reminds him of the anxiety many teachers experienced a couple of years ago during the pandemic. With students stuck at home, teachers had to find ways to set assignments where solutions were not too easy to Google. But what he found was that Googling—what to ask for and what to make of the results—was itself a skill worth teaching. 

Smith thinks chatbots could be the same way. If his undergraduate students want to use ChatGPT in their written assignments, he will assess the prompt as well as—or even rather than—the essay itself. “Knowing the words to use in a prompt and then understanding the output that comes back is important,” he says. “We need to teach how to do that.”

The new education

These changing attitudes reflect a wider shift in the role that teachers play, says Stansbury. Information that was once dispensed in the classroom is now everywhere: first online, then in chatbots. What educators must now do is show students not only how to find it, but what information to trust and what not to, and how to tell the difference. “Teachers are no longer gatekeepers of information, but facilitators,” she says.

In fact, teachers are finding opportunities in the misinformation and bias that large language models often produce. These shortcomings can kick off productive discussions, says Crompton: “The fact that it’s not perfect is great.”

Teachers are asking students to use ChatGPT to generate text on a topic and then getting them to point out the flaws. In one example that a colleague of Stansbury’s shared at her workshop, students used the bot to generate an essay about the history of the printing press. When its US-centric response included no information about the origins of print in Europe or China, the teacher used that as the starting point for a conversation about bias. “It’s a great way to focus on media literacy,” says Stansbury.

Crompton is working on a study of ways that chatbots can improve teaching. She runs off a list of potential applications she’s excited about, from generating test questions to summarizing information for students with different reading levels to helping with time-consuming administrative tasks such as drafting emails to colleagues and parents.

One of her favorite uses of the technology is to bring more interactivity into the classroom. Teaching methods that get students to be creative, to role-play, or to think critically lead to a deeper kind of learning than rote memorization, she says. ChatGPT can play the role of a debate opponent and generate counterarguments to a student’s positions, for example. By exposing students to an endless supply of opposing viewpoints, chatbots could help them look for weak points in their own thinking. 

Crompton also notes that if English is not a student’s first language, chatbots can be a big help in drafting text or paraphrasing existing documents, doing a lot to level the playing field. Chatbots can also serve students who have specific learning needs. Ask ChatGPT to explain Newton’s laws of motion to a student who learns better with images rather than words, for example, and it will generate an explanation that features balls rolling on a table.

Made-to-measure learning

All students can benefit from personalized teaching materials, says Culatta, because everybody has different learning preferences. Teachers might prepare a few different versions of their teaching materials to cover a range of students’ needs. Culatta thinks that chatbots could generate personalized material for 50 or 100 students and make bespoke tutors the norm. “I think in five years the idea of a tool that gives us information that was written for somebody else is going to feel really strange,” he says.

Some ed-tech companies are already doing this. In March, Quizlet updated its app with a feature called Q-Chat, built using ChatGPT, that tailors material to each user’s needs. The app adjusts the difficulty of the questions according to how well students know the material they’re studying and how they prefer to learn. “Q-Chat provides our students with an experience similar to a one-on-one tutor,” says Quizlet’s CEO, Lex Bayer.

In fact, some educators think future textbooks could be bundled with chatbots trained on their contents. Students would have a conversation with the bot about the book’s contents as well as (or instead of) reading it. The chatbot could generate personalized quizzes to coach students on topics they understand less well.


Not all these approaches will be instantly successful, of course. Donahoe and her students came up with guidelines for using ChatGPT together, but “it may be that we get to the end of this class and I think this absolutely did not work,” she says. “This is still an ongoing experiment.”

She has also found that students need considerable support to make sure ChatGPT promotes learning rather than getting in the way of it. Some students find it harder to move beyond the tool’s output and make it their own, she says: “It needs to be a jumping-off point rather than a crutch.”

And, of course, some students will still use ChatGPT to cheat. In fact, it makes it easier than ever. With a deadline looming, who wouldn’t be tempted to get that assignment written at the push of a button? “It equalizes cheating for everyone,” says Crompton. “You don’t have to pay. You don’t have to hack into a school computer.”

Some types of assignments will be harder hit than others, too. ChatGPT is really good at summarizing information. When that is the goal of an assignment, cheating is a legitimate concern, says Donahoe: “It would be virtually indistinguishable from an A answer in that context. It is something we should take seriously.”

None of the educators I spoke to have a fix for that. And not all other fears will be easily allayed. (Donahoe recalls a recent workshop at her university in which faculty were asked what they were planning to do differently after learning about ChatGPT. One faculty member responded: “I think I’ll retire.”)

But nor are teachers as worried as initial reports suggested. Cheating is not a new problem: schools have survived calculators, Google, Wikipedia, essays-for-pay websites, and more.

For now, teachers have been thrown into a radical new experiment. They need support to figure it out—perhaps even government support in the form of money, training, and regulation. But this is not the end of education. It’s a new beginning.


Can ChatGPT get into Harvard? We tested its admissions essay.

ChatGPT’s release a year ago triggered a wave of panic among educators. Now, universities are in the midst of college application season, concerned that students might use the artificial intelligence tool to forge admissions essays.

But is a chatbot-created essay good enough to fool college admissions counselors?

To find out, The Washington Post asked a prompt engineer — an expert at directing AI chatbots — to create college essays using ChatGPT. The chatbot produced two essays: one responding to a question from the Common Application, which thousands of colleges use for admissions, and one answering a prompt used solely for applicants to Harvard University.

We presented these essays to a former Ivy League college admissions counselor, Adam Nguyen, who previously advised students at Harvard University and read admissions essays at Columbia University. We presented Nguyen with a control: a set of real college admissions essays penned by Jasmine Green, a Post intern who used them to get into Harvard University, where she is currently a senior.

We asked Nguyen to read the essays and spot which ones were produced by AI. The results were illuminating.

Can you figure out which one was written by a human?

Who wrote this?

Since kindergarten, I have evaluated myself from the reflection of my teachers. I was the clever, gifted child. I was a pleasure to have in class. I was driven and tenacious—but lazy? Unmotivated? No instructor had ever directed those harsh words at me. My identity as a stellar student had been stripped of its luster; I was destroyed.

Computer science and college admissions experts say that AI-created essays have some easy tells — helpful for admissions officers who are prepping for an uptick in ChatGPT-written essays.

Responses written by ChatGPT often lack specific details, leading to essays that lack supporting evidence for their points. The writing is trite and uses platitudes to explain situations, rather than delving into the emotional experience of the author. The essays are often repetitive and predictable, leaving readers without surprise or a sense of the writer’s journey. If chatbots produce content on issues of race, sex or socioeconomic status, they often employ stereotypes.

At first, Nguyen was impressed by the AI-generated essays: They were readable and mostly free of grammatical errors. But if he was reviewing the essay as part of an application package, he would’ve stopped reading.

“The essay is such a mediocre essay that it would not help the candidate’s application or chances,” he said in an interview. “In fact, it would probably diminish it.”

Here is how Nguyen evaluated ChatGPT’s essay.

Nguyen said that while AI may be sufficient to use for everyday writing, it is particularly unhelpful in creating college admissions essays. To start, he said, admissions offices are using AI screening tools to filter out computer-generated essays. (This technology can be inaccurate and falsely implicate students, a Post analysis found.)

But more importantly, admissions essays are a unique type of writing, he said. They require students to reflect on their life and craft their experiences into a compelling narrative that quickly provides college admissions counselors with a sense of why that person is unique.

“ChatGPT is not there,” he said.

Nguyen understands why AI might be appealing. College application deadlines often fall around the busiest time of the year, near winter holidays and end-of-semester exams. “Students are overwhelmed,” Nguyen said.

But Nguyen isn’t entirely opposed to using AI in the application process. In his current business, Ivy Link, he helps students craft college applications. For those who are weak in writing, he sometimes suggests they use AI chatbots to start the brainstorming process, he said.

For those who can’t resist the urge to use AI for more than just inspiration, there may be consequences.

“Their essays will be terrible,” he said, “and might not even reflect who they are.”

About this story

Jasmine Green contributed to this report.

The Washington Post worked with Benjamin Breen, an associate professor of history at the University of California at Santa Cruz who studies the impact of technological change, to create the AI-generated essays.

Editing by Karly Domb Sadof, Betty Chavarria and Alexis Sobel Fitts.

On ChatGPT: A Letter to My Students

The rapid development of AI-generated content has led to something of a crisis in higher education.

By Russell P. Johnson | April 6, 2023

Dear Students,

By now, you are surely familiar with ChatGPT and other artificial intelligence programs that are capable of composing essays. Perhaps you’ve used it for your college papers—one study shows that, as of this January, thirty percent of college students have. The rapid development of AI-generated content has led to something of a crisis in higher education. The death of the college essay is already being predicted, though some argue that reports of its death have been exaggerated. In February, one of your classmates optimistically suggested that the onset of AI could lead to the end of banal writing prompts. Just as lions make the antelope herd faster by eating the slowest members, so GPT may compel professors to develop better writing prompts without easily automated responses. But developments within the past few weeks suggest that even creative prompts cannot escape the hungry maw of AI. This has led many adaptive professors to welcome our new robot overlords by designing assignments that require thoughtful use of the technology.

Now, I could remind you, my dear students, of the severity of what will happen if you get caught using this technology for your assignments. The use of this technology to craft essays is a form of plagiarism, and indeed, the technology itself is plagiaristic. As Bible scholar Anathea Portier-Young writes, “It is a violation of the academic integrity norms and expectations which require that your work be your own. Tools such as ChatGPT harvest the ideas and work of others without giving credit for them.” I could warn you that the arms race between AI programs and AI detection programs has not yet been decisively won, and anyone caught using ChatGPT for their papers will face university disciplinary processes.

But instead, I’ll encourage you to think about what will happen if you use AI and don’t get caught. What will you miss out on if you subvert the writing process? Is learning how to write argumentative papers simply a drudgery to be automated away as soon as possible, or is it a spiritual exercise?

When I assign you to write a four-page paper on the Zhuangzi, for instance, it is not because I am under the illusion that you will need that knowledge later in your professional life. Writing a four-page paper on the Zhuangzi will not prepare you to compete in the high-pressure economic landscape of twenty-first-century America. Spending time with Daoist philosophy will, if anything, make you less employable, since CEOs tend not to be thrilled about “non-doing.” No, the reason why I insist you do writing assignments like this is that they give you valuable practice discovering insights and communicating them to others. Reading texts closely, encountering a problem, developing a plausible interpretation, and persuading readers of that interpretation—these are the steps one must go through in order to write a good paper. Going through these steps again and again makes us clearer thinkers and better communicators. Resolving confusions readers could have when reading the Zhuangzi gives us practice imagining how other people see the world and showing them a more accurate representation of how things are.

Theologian Jonathan Malesic makes this point well: “Just as important, learning to write trains your imagination to construct the person who will read your words. Writing, then, is an ethical act. It puts you in relation to someone you may not know, someone who may, in fact, not yet exist. When you learn to write, you learn to exercise your responsibility to that person, to meet their needs in a context you cannot fully know.” Learning how to write persuasively involves learning to see the world through another person’s eyes, to identify the limitations of their perspective, and to guide them toward a new one using reasons and evidence. One can certainly develop these capacities outside of college, but college courses provide a supervised, low-stakes environment to practice the rhetorical arts of invention, arrangement, and style. At its best, this training in writing prepares you to employ empathy, imagination, and intellect all at once in the act of communicating with someone who disagrees with you.

This practice can have religious significance. According to Buddhist author Charles Johnson, the Buddha’s practice of upaya kausalya involves “adjusting the wisdom to the level on which his listeners can receive it.” It is not enough to know the dharma, one must also know how to share it with those who have not accepted it. Many religions involve a responsibility to show others truths that they may not presently acknowledge. As Ludwig Wittgenstein writes, “To convince someone of the truth, it is not enough to state it, but rather one must find the path from error to truth.”

I know it may seem to you that you are simply learning an essay form and applying it repeatedly. But this form mirrors the structure of thought, and a good essay is thoroughly dialogical—anticipating objections, responding to misinterpretations, and providing only what is needed to bring someone from confusion to clarity. Paradoxically, we write essays in isolation so we can arrive at the truth through conversation. Thus, even if you are writing your papers alone in the library at 2:14 am, you are getting better at thinking alongside others, a rare and necessary skill in an age of misinformation and polarization.

Furthermore, you pay specific attention to the world when you know you will need to write something. The Chinese playwright Li Yu is quoted as saying that one must keep one’s pen in one’s sleeve as one goes about one’s day, so that when the time comes to write, everything will present itself. If I know that I will have to write a paper about courage in a month, then I start to see courage pop up everywhere—in news articles, in SportsCenter segments, and in ads for cryptocurrency. Writing assignments tune your attention to pick up on aspects of experience you may have glossed over, and as Iris Murdoch argues, writing itself is an act of developing one’s capacity for attention.

St. Augustine, John Calvin, Wittgenstein, E.M. Forster, Joan Didion, and Stephen King do not have much in common (students, you may have to trust me on this). But all of them insist they learn what they think through the act of writing. For most of us, writing is a process whereby our ideas become clear, not a process by which we take what is lucid within our minds and translate it to prose. Writing bakes our half-baked ideas. It makes the mind more transparent to itself.

One cannot neatly separate writing for self-expression (your own poetry, for instance) from the kind of writing we do for others (your four-page essay). If you use ChatGPT for your writing assignments, you deprive yourself of the benefits of coming to terms with what you actually believe. Making an AI program write a paper for you is like turning on a treadmill, letting it run for 26.2 miles without taking a step, and then claiming that you ran a marathon. You are reinforcing within yourself the notion that a diploma is more important than an education, that getting a grade is more important than learning. Whether you get caught or not, I invite you to examine your priorities, and to consider what you will be missing if you bypass the frustrating but rewarding task of writing.

I also want you to think about me. You are responsible, not just to your imagined readers, but to your actual reader. If you submit a paper written by a computer, I have to spend thirty minutes grading a paper written by a computer. I will be investing a half hour of my time—our most irreplaceable resource—gently pointing out the flaws in the reasoning and style, re-reading “your” argument to ensure I understand “your” points, and offering constructive criticism that is both honest and encouraging. That is time I could have spent volunteering at a local charity, watching the sun rise over Lake Michigan, or telling my wife I love her. I am spending part of what Mary Oliver calls my “one wild and precious life” giving you feedback on the papers you submit. I do it because I care about you, and because I believe that even if I am the only one who ever reads it, your writing matters.

RPJ

Russell P. Johnson

Columnist Russell Johnson (PhD’19) is Assistant Director of the Undergraduate Religious Studies Program at the University of Chicago Divinity School. His research focuses on antagonism, nonviolence, and the philosophy of communication.


The End of Recommendation Letters

Professors, like their students, use ChatGPT to get out of doing their assignments.


Early spring greened outside the picture window in the faculty club. I was lunching with a group of fellow professors, and, as happens these days when we assemble, generative artificial intelligence was discussed. Are your students using it? What are you doing to prevent cheating? Heads were shaken in chagrin as iced teas were sipped for comfort.

But then, one of my colleagues wondered: Could he use AI to generate a reference letter for a student? Faculty write loads of these every year, in support of applications for internships, fellowships, industry jobs, graduate school, university posts. They all tend to be more or less the same, yet they also somehow take a lot of time, and saving some of it might be nice. Other, similar ideas spilled out quickly. Maybe ChatGPT could help with grant proposals. Or syllabi, even? The ideas seemed revelatory, but also scandalous.

Scandalous because we faculty, like all faculty everywhere, were drawn into an educators’ panic about AI over the winter. When ChatGPT began to spread around the internet last December, fears of its impact gripped our profession: The college essay is dead ! It’s the end of high-school English ! Students will let computers do their homework ! Task forces were launched to investigate. Syllabi were updated with academic-integrity warnings. Op-eds were written. And now, in the faculty club, we professors were musing over how to automate our own assignments?

Large language models can be pretty bad at generating accurate facts and knowledge. But they’re pretty darn good at creating plausible renditions of the work output you don’t care that much about. It is here, where exhaustion meets nuisance, that AI brings students and faculty together.

Take reference letters. ChatGPT can’t explain why you would (or wouldn’t) recommend a specific individual for a specific role, but it can give you a detailed template. A University of Texas professor I spoke with uses AI as a starting point for both lecture content and reference-letter writing. “Quite generic,” the faculty member reported, “but then the average letter is … ?” I’m withholding the faculty member’s name to protect this person from feared reprisal. A shortcut like this can easily be seen as shirking work, but with so much work to do, maybe something has to give. ChatGPT seemed to cut the time involved in writing letters by half.

“A dirty secret of academe is that most professors have a cache of letters separated into different categories,” says Matt Huculak, another AI-using academic and the head of advanced research services at the University of Victoria libraries. They’ll typically have folders full of excellent, good, and average ones, which can be adjusted and repurposed as appropriate. But Huculak wondered if AI might help break that chain, especially for top students. So he asked ChatGPT to write an “excellent” reference letter, and then, instead of using it as a template, he treated it as an enemy. He opened the ChatGPT output in one window and tried to compose the very opposite of what he saw: an anti-formulaic recommendation letter. “What I wrote ended up feeling like the most ‘human’ and heartfelt letter I’ve written in a long time,” he told me. The student won a prestigious scholarship at Cambridge.


Nothing was stopping Huculak from applying the same technique to one of his own formulaic letters, striving to produce its inverse. But having a machine “lay the genre bare,” as Huculak put it, somehow gave him the comfort to play around with the material. It also broke him of the terror of the blank page.

Stephanie Kane, who teaches at George Mason University, also told me that ChatGPT upends the difficulty of creating something out of nothing. When she began developing a syllabus for a new class, she asked ChatGPT to generate ideas, “kind of like a rubber duck that talks back.” Kane quickly discovered that ChatGPT can’t be trusted to suggest readings that actually exist, but it could suggest topics or concepts. Kane also asked colleagues on social media, as faculty tend to do, but that burdens her colleagues. “I think ChatGPT was better, honestly. It doesn’t judge, so I could ask any questions I want without being worried of sounding silly or unprepared,” she said.

Huculak and Kane hoped to overcome platitude, but Hank Blumenthal, a film producer who has worked in both industry and academia, looked to ChatGPT to gain more insight into cliché. Having been passed over for academic jobs in his area, Blumenthal wondered if his required position statement on diversity, equity, and inclusion might have been too unusual for their tastes. “My current diversity statement is about all the movies I produced where I hired Black, Asian, female, diverse crew, directors, actors,” he told me. “Still, I think schools want something else.” Given what ChatGPT can do, Blumenthal said, “I was looking to see what might be the expected discourse.”


Blumenthal doesn’t want ChatGPT to take a diversity position on his behalf. Rather, he hopes that it can help him conform to expectations. “I sought the differences between what I had done and the expected versions,” he told me. Likewise, an American University professor I spoke with copped to using AI to generate the formal “assessment criteria” that now must be a part of course and degree proposals, for example. “It did a great job at sounding like the sort of thing someone evaluating a course without knowing anything about the field would want to hear,” the professor said. The generated material was good enough to make it into the actual proposal. (I granted the professor anonymity so that the proposals would not be penalized for incorporating computer-generated text.)

A common lament about large language models holds that, having been trained on piles of existing material, they can’t provide originality. But a professor isn’t often charged with saying something truly new. Much of what we do all day is office work: writing letters, processing forms, compiling reports. AI can tame that labor, or at least offer a feeling of superiority over it.

That may be true for students too. They also feel overwhelmed and overworked: stretched thin by different professors, who each have no idea what the others have demanded; suffocated by tuition costs; confused about their future prospects; and tested by the transition to adulthood. Students come to college first for the college experience , and second to learn and earn credentials. Their faculty may view class assignments as unalloyed goods that would be sullied by a chatbot’s intervention, while students see them as distractions from the work of making sense of who they are. In that respect, AI only helps to clear away annoying obstacles, so we all can move along to doing things that really matter.


Lecturers urged to review assessments in UK amid concerns over new AI tool

ChatGPT is capable of producing high-quality essays with minimal human input


Lecturers at UK universities have been urged to review the way in which their courses are assessed amid concerns that students are already using a potent new AI tool capable of producing high-quality essays with minimal human input.

ChatGPT, the latest chatbot from OpenAI, founded in 2015 by Elon Musk, Sam Altman and others, has only been publicly available for a matter of weeks, but has already triggered concerns about the potential for hard-to-detect plagiarism and questions about the validity of the essay as a future form of assessment.

It has been described as “a gamechanger” that will prove a challenge in universities and schools. Though GCSE and A-level courses are assessed through traditional end-of-course examinations, experts are concerned pupils who use the technology to do their homework will become dependent on AI-generated answers without acquiring the knowledge and skills they need.

Working groups have been set up in university departments to assess the challenge of this latest iteration of AI text-generating technology, with the expectation that methods of assessment in certain courses will have to be updated. Experts admit to feeling both excited and alarmed.

In one case, staff in the computer science department at University College London recently decided to change an assessment. Previously students were offered a choice between an essay-based or skills-based assessment as part of final coursework, but the essay option has been removed.

Geoff Barton, the general secretary of the Association of School and College Leaders, meanwhile, acknowledged that schools would have to get to grips with how to utilise ChatGPT’s benefits while guarding against negative implications.

“As with all technology, there are caveats around making sure that it is used responsibly and not as a licence to cheat, but none of that is insurmountable,” he said. In contrast, New York City schools have already banned the use of ChatGPT on all devices and networks because of concerns it will encourage plagiarism.

Dr Thomas Lancaster, a computer scientist working at Imperial College London, best known for his research into academic integrity, contract cheating and plagiarism, said it was in many ways a game changer. He said: “It’s certainly a major turning point in education where universities have to make big changes.

“They have to adapt sooner rather than later to make sure that students are assessed fairly, that they all compete on a level playing field and that they still have the skills needed beyond university.

“There’s been technology around for several years that will generate text. The big change is that this technology is wrapped up in a very nice interface where you can interact with it, almost like speaking to another human. So it makes it available to a lot of people.”

Because ChatGPT is capable of coming up with countless original combinations of words, Lancaster said it would be more difficult to spot and prove plagiarism, though work is already under way to improve detection software. “It’s an incredibly tricky problem because this has almost appeared out of nowhere … I would be completely shocked if there weren’t students already using GPT for assessments.”

Australian universities have already signalled a return to greater use of in-person exams to try to protect the integrity of assessment. Lancaster said, however, a wholesale return to exams was not the answer, suggesting instead the use of oral presentations and vivas to enable students to demonstrate their grasp of a subject.

Though ChatGPT marks a huge step forward in the evolution of AI text generation, it is not infallible. “It may sound very plausible, but the more detail or facts you need as part of your question, the more likely it is that the algorithm will produce something that looks good, but is completely wrong,” said Michael Draper, professor in legal education at the University of Swansea and an expert on academic integrity and cheating.

He said universities could try to ban it, as they ban other forms of cheating like essay mills, which students pay to produce an original piece of work that they then submit. Draper said: “The other alternative is that you allow students to use it. If we’re preparing students for the outside world of work and if in the workplace this sort of technology is given to us, then I think we need to embrace it rather than ban it.”

Michael Webb, the director of technology and analytics at Jisc, which provides network and IT services to higher education, agreed. He said: “The knee-jerk reaction might be to block these tools in order to stop students cheating, but that’s neither feasible nor advisable. We should really regard them as simply the next step up from spelling or grammar checkers: technology that can make everyone’s life easier.


“We don’t need to revert to in-person exams: this is a great opportunity for the sector to explore new assessment techniques that measure learners on critical thinking, problem-solving and reasoning skills rather than essay-writing abilities.”

A ChatGPT essay

Q to ChatGPT: Write a university-level essay about the causes of the first world war. The essay should be written from a UK perspective and use UK English.

A: The first world war, also known as the Great War, was a global conflict that lasted from 1914 to 1918 and resulted in the deaths of millions of people. The causes of the war are complex and varied, but can be broadly categorised into long-term, short-term, and immediate causes.

One of the main long-term causes of the war was the competition for colonies and resources between the major European powers. In the late 19th and early 20th centuries, the British Empire and other European nations were expanding their empires and seeking new territories to exploit for resources and markets. This led to tensions between the major powers, as they competed for control of the same areas.

Another long-term cause was the arms race between the major powers. In the years leading up to the war, countries such as Germany, France, and Britain had been investing heavily in their military capabilities, including building large armies and navies. This arms race led to an atmosphere of mistrust and fear between the major powers, as each feared that the others were becoming too powerful.

Short-term causes of the war include the assassination of Archduke Franz Ferdinand of Austria-Hungary in 1914, and the system of alliances that existed between the major European powers. The assassination of the Archduke, which was carried out by a member of a Bosnian-Serbian nationalist group, led to Austria-Hungary declaring war on Serbia, which in turn led to other countries getting involved as a result of their alliances.

Immediate causes of the war include the mobilisation of armies by the major powers in response to the outbreak of war, and the Schlieffen Plan, which was a military strategy developed by Germany to quickly defeat France before turning its attention to Russia.

In conclusion, the causes of the First World War were complex and varied. The competition for colonies and resources between the major European powers, the arms race between the major powers, the assassination of Archduke Franz Ferdinand, the system of alliances, the mobilisation of armies and the Schlieffen Plan all contributed to the outbreak of war in 1914.


Alarmed by A.I. Chatbots, Universities Start Revamping How They Teach

With the rise of the popular new chatbot ChatGPT, colleges are restructuring some courses and taking preventive measures.


By Kalley Huang

Kalley Huang, who covers youth and technology from San Francisco, interviewed more than 30 professors, students and university administrators for this article.

While grading essays for his world religions course last month, Antony Aumann, a professor of philosophy at Northern Michigan University, read what he said was easily “the best paper in the class.” It explored the morality of burqa bans with clean paragraphs, fitting examples and rigorous arguments.

A red flag instantly went up.

Mr. Aumann confronted his student over whether he had written the essay himself. The student confessed to using ChatGPT , a chatbot that delivers information, explains concepts and generates ideas in simple sentences — and, in this case, had written the paper.

Alarmed by his discovery, Mr. Aumann decided to transform essay writing for his courses this semester. He plans to require students to write first drafts in the classroom, using browsers that monitor and restrict computer activity. In later drafts, students have to explain each revision. Mr. Aumann, who may forgo essays in subsequent semesters, also plans to weave ChatGPT into lessons by asking students to evaluate the chatbot’s responses.

“What’s happening in class is no longer going to be, ‘Here are some questions — let’s talk about it between us human beings,’” he said, but instead “it’s like, ‘What also does this alien robot think?’”

Across the country, university professors like Mr. Aumann, department chairs and administrators are starting to overhaul classrooms in response to ChatGPT , prompting a potentially huge shift in teaching and learning. Some professors are redesigning their courses entirely, making changes that include more oral exams, group work and handwritten assessments in lieu of typed ones.

The moves are part of a real-time grappling with a new technological wave known as generative artificial intelligence . ChatGPT, which was released in November by the artificial intelligence lab OpenAI, is at the forefront of the shift. The chatbot generates eerily articulate and nuanced text in response to short prompts, with people using it to write love letters, poetry, fan fiction — and their schoolwork.

That has upended some middle and high schools, with teachers and administrators trying to discern whether students are using the chatbot to do their schoolwork. Some public school systems, including in New York City and Seattle, have since banned the tool on school Wi-Fi networks and devices to prevent cheating, though students can easily find workarounds to access ChatGPT.

In higher education, colleges and universities have been reluctant to ban the A.I. tool because administrators doubt the move would be effective and they don’t want to infringe on academic freedom. That means the way people teach is changing instead.

“We try to institute general policies that certainly back up the faculty member’s authority to run a class,” instead of targeting specific methods of cheating, said Joe Glover, provost of the University of Florida. “This isn’t going to be the last innovation we have to deal with.”

That’s especially true as generative A.I. is in its early days. OpenAI is expected to soon release another tool, GPT-4, which is better at generating text than previous versions. Google has built LaMDA, a rival chatbot, and Microsoft is discussing a $10 billion investment in OpenAI. Silicon Valley start-ups, including Stability AI and Character.AI, are also working on generative A.I. tools.

An OpenAI spokeswoman said the lab recognized its programs could be used to mislead people and was developing technology to help people identify text generated by ChatGPT.

At many universities, ChatGPT has now vaulted to the top of the agenda. Administrators are establishing task forces and hosting universitywide discussions to respond to the tool, with much of the guidance being to adapt to the technology.

At schools including George Washington University in Washington, D.C., Rutgers University in New Brunswick, N.J., and Appalachian State University in Boone, N.C., professors are phasing out take-home, open-book assignments — which became a dominant method of assessment in the pandemic but now seem vulnerable to chatbots. They are instead opting for in-class assignments, handwritten papers, group work and oral exams.

Gone are prompts like “write five pages about this or that.” Some professors are instead crafting questions that they hope will be too clever for chatbots and asking students to write about their own lives and current events.

Students are “plagiarizing this because the assignments can be plagiarized,” said Sid Dobrin, chair of the English department at the University of Florida.

Frederick Luis Aldama, the humanities chair at the University of Texas at Austin, said he planned to teach newer or more niche texts that ChatGPT might have less information about, such as William Shakespeare’s early sonnets instead of “A Midsummer Night’s Dream.”

The chatbot may motivate “people who lean into canonical, primary texts to actually reach beyond their comfort zones for things that are not online,” he said.

In case the changes fall short of preventing plagiarism, Mr. Aldama and other professors said they planned to institute stricter standards for what they expect from students and how they grade. It is now not enough for an essay to have just a thesis, introduction, supporting paragraphs and a conclusion.

“We need to up our game,” Mr. Aldama said. “The imagination, creativity and innovation of analysis that we usually deem an A paper needs to be trickling down into the B-range papers.”

Universities are also aiming to educate students about the new A.I. tools. The University at Buffalo in New York and Furman University in Greenville, S.C., said they planned to embed a discussion of A.I. tools into required courses that teach entering or freshman students about concepts such as academic integrity.

“We have to add a scenario about this, so students can see a concrete example,” said Kelly Ahuna, who directs the academic integrity office at the University at Buffalo. “We want to prevent things from happening instead of catch them when they happen.”

Other universities are trying to draw boundaries for A.I. Washington University in St. Louis and the University of Vermont in Burlington are drafting revisions to their academic integrity policies so their plagiarism definitions include generative A.I.

John Dyer, vice president for enrollment services and educational technologies at Dallas Theological Seminary, said the language in his seminary’s honor code felt “a little archaic anyway.” He plans to update its plagiarism definition to include: “using text written by a generation system as one’s own (e.g., entering a prompt into an artificial intelligence tool and using the output in a paper).”

The misuse of A.I. tools will most likely not end, so some professors and universities said they planned to use detectors to root out that activity. The plagiarism detection service Turnitin said it would incorporate more features for identifying A.I., including ChatGPT, this year.

More than 6,000 teachers from Harvard University, Yale University, the University of Rhode Island and others have also signed up to use GPTZero, a program that promises to quickly detect A.I.-generated text, said Edward Tian, its creator and a senior at Princeton University.

Some students see value in embracing A.I. tools to learn. Lizzie Shackney, 27, a student at the University of Pennsylvania’s law school and design school, has started using ChatGPT to brainstorm for papers and debug coding problem sets.

“There are disciplines that want you to share and don’t want you to spin your wheels,” she said, describing her computer science and statistics classes. “The place where my brain is useful is understanding what the code means.”

But she has qualms. ChatGPT, Ms. Shackney said, sometimes incorrectly explains ideas and misquotes sources. The University of Pennsylvania also hasn’t instituted any regulations about the tool, so she doesn’t want to rely on it in case the school bans it or considers it to be cheating, she said.

Other students have no such scruples, sharing on forums like Reddit that they have submitted assignments written and solved by ChatGPT — and sometimes done so for fellow students too. On TikTok, the hashtag #chatgpt has more than 578 million views, with people sharing videos of the tool writing papers and solving coding problems .

One video shows a student copying a multiple choice exam and pasting it into the tool with the caption saying: “I don’t know about y’all but ima just have Chat GPT take my finals. Have fun studying.”

Kalley Huang is a technology reporting fellow based in San Francisco. She graduated from the University of North Carolina at Chapel Hill.


Can You Use ChatGPT for Your College Essay?


ChatGPT has become a popular topic of conversation since its official launch in November 2022. The artificial intelligence (AI) chatbot can be used for all sorts of things, like having conversations, answering questions, and even crafting complete pieces of writing.

If you’re applying for college, you might be wondering about ChatGPT’s potential in college admissions. Should you use a ChatGPT college essay in your application?

By the time you finish reading this article, you’ll know much more about ChatGPT, including how students can use it responsibly and if it’s a good idea to use ChatGPT on college essays . We’ll answer all your questions, like:

  • What is ChatGPT and why are schools talking about it?
  • What are the good and bad aspects of ChatGPT?
  • Should you use ChatGPT for college essays and applications?
  • Can colleges detect ChatGPT?
  • Are there other tools and strategies that students can use, instead?

We’ve got a lot to cover, so let’s get started!


Schools and colleges are worried about how new AI technology affects how students learn. (Don't worry. Robots aren't replacing your teachers...yet.)

What Is ChatGPT and Why Are Schools Talking About It?

ChatGPT (short for “Chat Generative Pre-trained Transformer”) is a chatbot created by OpenAI, an artificial intelligence research company. ChatGPT can be used for various tasks, like having human-like conversations, answering questions, giving recommendations, translating words and phrases—and writing things like essays. 

In order to do this, ChatGPT uses a neural network that’s been trained on thousands of resources to predict relationships between words. When you give ChatGPT a task, it uses that knowledge base to interpret your input or query. It then analyzes its data banks to predict the combinations of words that will best answer your question. 

So while ChatGPT might seem like it’s thinking, it’s actually pulling information from hundreds of thousands of resources, then answering your questions by looking for patterns in that data and predicting which words come next.
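To make that “predicting which words come next” idea concrete, here is a deliberately tiny Python sketch (our own illustration, not OpenAI’s code): it counts which word follows which in a toy corpus and then “predicts” the most common follower. ChatGPT does the same kind of pattern-based prediction, but with a neural network trained on vastly more text.

```python
# A toy illustration of next-word prediction (not OpenAI's actual model):
# count which word follows which in a tiny corpus, then predict the most
# frequent follower.
from collections import Counter, defaultdict

corpus = "the essay was strong . the essay was short . the argument was strong".split()

# Count, for each word, which words follow it and how often.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word: str) -> str:
    """Return the most frequently observed follower of `word` in the corpus."""
    followers = next_word_counts[word]
    return followers.most_common(1)[0][0] if followers else "<unknown>"

print(predict_next("essay"))  # 'was' -- the only follower seen
print(predict_next("was"))    # 'strong' (seen twice) beats 'short' (seen once)
```

The real system works over probabilities of many possible next words rather than a single most common one, but the sketch shows why the output sounds fluent without involving any actual understanding.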

Why Schools Are Concerned About ChatGPT

Unsurprisingly, schools are worried about ChatGPT and its misuse, especially in terms of academic dishonesty and plagiarism . Most schools, including colleges, require students’ work to be 100% their own. That’s because taking someone else’s ideas and passing them off as your own is stealing someone else’s intellectual property and misrepresenting your skills. 

The problem with ChatGPT from schools’ perspective is that it does the writing and research for you, then gives you the final product. In other words, you’re not doing the work it takes to complete an assignment when you’re using ChatGPT , which falls under schools’ plagiarism and dishonesty policies.  

Colleges are also concerned with how ChatGPT will negatively affect students’ critical thinking, research, and writing skills . Essays and other writing assignments are used to measure students’ mastery of the material, and if students submit ChatGPT college essays, teachers will just be giving feedback on an AI’s writing…which doesn’t help the student learn and grow. 

Beyond that, knowing how to write well is an important skill people need to be successful throughout life. Schools believe that if students rely on ChatGPT to write their essays, they’re doing more than just plagiarizing—they’re impacting their ability to succeed in their future careers. 

Many Schools Have Already Banned ChatGPT

Schools have responded surprisingly quickly to AI use, including ChatGPT. Worries about academic dishonesty, plagiarism, and mis/disinformation have led many high schools and colleges to ban the use of ChatGPT . Some schools have begun using AI-detection software for assignment submissions, and some have gone so far as to block students from using ChatGPT on their internet networks. 

It’s likely that schools will begin revising their academic honesty and plagiarism policies to address the use of AI tools like ChatGPT. You’ll want to stay up-to-date with your schools’ policies. 


ChatGPT is pretty amazing...but it's not a great tool for writing college essays. Here's why.

ChatGPT: College Admissions and Entrance Essays

College admissions essays—also called personal statements—ask students to explore important events, experiences, and ideas from their lives. A great entrance essay will explain what makes you you!

ChatGPT is a machine that doesn’t know and can’t understand your experiences. That means using ChatGPT to write your admissions essays isn’t just unethical. It actually puts you at a disadvantage because ChatGPT can’t adequately showcase what it means to be you. 

Let’s take a look at four ways ChatGPT negatively impacts college admissions essays.

#1: ChatGPT Lacks Insight

We recommend students use unexpected or slightly unusual topics because they help admissions committees learn more about you and what makes you unique. The chatbot doesn’t know any of that, so nothing ChatGPT writes can accurately reflect your experience, passions, or goals for the future.

Because ChatGPT will make guesses about who you are, it won’t be able to share what makes you unique in a way that resonates with readers. And since that’s what admissions counselors care about, a ChatGPT college essay could negatively impact an otherwise strong application.  

#2: ChatGPT Might Plagiarize 

Writing about experiences that many other people have had isn’t a very strong approach to take for entrance essays . After all, you don’t want to blend in—you want to stand out! 

If you write your essay yourself and include key details about your past experiences and future goals, there’s little risk that you’ll write the same essay as someone else. But if you use ChatGPT—who’s to say someone else won’t, too? Since ChatGPT uses predictive guesses to write essays, there’s a good chance the text it uses in your essay already appeared in someone else’s.  

Additionally, ChatGPT learns from every single interaction it has. So even if your essay isn’t plagiarized, it’s now in the system. That means the next person who uses ChatGPT to write their essay may end up with yours. You’ll still be on the hook for submitting a ChatGPT college essay, and someone else will be in trouble, too.

#3: ChatGPT Doesn’t Understand Emotion 

Keep in mind that ChatGPT can’t experience or imitate emotions, and so its writing samples lack, well, a human touch!

A great entrance essay will explore experiences or topics you’re genuinely excited about or proud of . This is your chance to show your chosen schools what you’ve accomplished and how you’ll continue growing and learning, and an essay without emotion would be odd considering that these should be real, lived experiences and passions you have!

#4: ChatGPT Produces Mediocre Results

If you’re still curious what would happen if you submitted a ChatGPT college essay with your application, you’re in luck. Both Business Insider and Forbes asked ChatGPT to write a couple of college entrance essays, and then they sent them to college admissions readers to get their thoughts. 

The readers agreed that the essays would probably pass as being written by real students—assuming admissions committees didn’t use AI detection software—but that they both were about what a “very mediocre, perhaps even a middle school, student would produce.” The admissions professionals agreed that the essays probably wouldn’t perform very well with entrance committees, especially at more selective schools.  

That’s not exactly the reaction you want when an admission committee reads your application materials! So, when it comes to ChatGPT college admissions, it’s best to steer clear and write your admission materials by yourself. 


Can Colleges Detect ChatGPT?

We’ve already explained why it’s not a great idea to use ChatGPT to write your college essays and applications , but you may still be wondering: can colleges detect ChatGPT? 

In short, yes, they can! 

Software Can Detect ChatGPT

As technology improves and increases the risk of academic dishonesty, plagiarism, and mis/disinformation, software that can detect such technology is improving, too. For instance, OpenAI, the same company that built ChatGPT, is working on a text classifier that can tell the difference between AI-written text and human-written text .  

Turnitin, one of the most popular plagiarism detectors used by high schools and universities, also recently developed the AI Innovation Lab —a detection software designed to flag submissions that have used AI tools like ChatGPT. Turnitin says that this tool works with 98% confidence in detecting AI writing. 

Plagiarism and AI companies aren’t the only ones interested in AI-detection software. A 22-year-old computer science student at Princeton created an app to detect ChatGPT writing, called GPTZero. This software works by measuring the complexity of ideas and the variety of sentence structures.
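The exact scoring these detectors use isn’t public, but the “variety of sentence structures” signal is easy to illustrate. The toy Python function below is a rough sketch of the idea only (it is not GPTZero’s or Turnitin’s actual algorithm): it measures how much sentence lengths vary in a passage, since human writing often mixes short and long sentences while AI-generated text tends to be more uniform.

```python
# A hypothetical, simplified "sentence variety" signal: higher numbers mean a
# passage mixes long and short sentences more, which is one rough proxy that
# detectors are said to consider. Not any real product's algorithm.
import re
import statistics

def sentence_length_variation(text: str) -> float:
    """Standard deviation of sentence lengths (in words) for a passage."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return statistics.pstdev(lengths) if len(lengths) > 1 else 0.0

uniform = "The essay is clear. The essay is long. The essay is good. The essay is neat."
varied = "I froze. Nobody had ever called me lazy before, and the word stung more than any grade could."

print(sentence_length_variation(uniform))  # 0.0 -- every sentence is the same length
print(sentence_length_variation(varied))   # 7.0 -- a short sentence next to a long one
```

A single number like this is far too crude to accuse anyone of anything, which is part of why detection tools can misfire; real detectors combine many signals and still make mistakes.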

Human Readers Can Detect ChatGPT 

It’s also worth keeping in mind that teachers can spot the use of ChatGPT themselves, even if it isn’t confirmed by a software detector. For example, if you’ve turned in one or two essays to your teacher already, they’re probably familiar with your unique writing style. If you submit a college essay draft that uses totally different vocabulary, sentence structures, and figures of speech, your teacher will likely take note.

Additionally, admissions committees and readers may be able to spot ChatGPT writing, too. ChatGPT (and AI writing, in general) uses more simplistic sentence structures with less variation, so that could make it easier to tell if you’ve submitted a ChatGPT college essay. These professionals also read thousands of essays every year, which means they know what a typical essay reads like. You want your college essay to catch their attention…but not because you used AI software! 


If you use ChatGPT responsibly, you can be as happy as these kids.

Pros and Cons of ChatGPT: College Admissions Edition

ChatGPT is a brand new technology, which means we’re still learning about the ways it can benefit us. It’s important to think about the pros and the cons to any new tool …and that includes artificial intelligence!

Let’s look at some of the good—and not-so-good—aspects of ChatGPT below. 

ChatGPT: The Good

It may seem like we’re focused on just the negatives of using ChatGPT in this article, but we’re willing to admit that the chatbot isn’t all bad. In fact, it can be a very useful tool for learning if used responsibly!

Like we already mentioned, students shouldn’t use ChatGPT to write entire essays or assignments. They can use it, though, as a learning tool alongside their own critical thinking and writing skills.

Students can use ChatGPT responsibly to:

  • Learn more about a topic . It’s a great place to get started for general knowledge and ideas about most subjects.
  • Find reputable and relevant sources on a topic. Students can ask ChatGPT for names and information about leading scholars, relevant websites and databases, and more. 
  • Brainstorm ideas for assignments. Students can share the ideas they already have with ChatGPT, and in return, the chatbot can suggest ideas for further exploration and even organization of their points.
  • Check work (that they’ve written themselves!) for errors or clarity. This is similar to how spell- and grammar-checking software is used. ChatGPT may be even better than some competitors for this, because students can actually ask ChatGPT to explain the errors and their solutions—not just to fix them (see the sketch after this list). 
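If you want to try the “explain the errors” idea outside the chat window, OpenAI also offers a programming interface. The sketch below uses OpenAI’s official Python library and assumes you have your own API key set in the OPENAI_API_KEY environment variable; pasting the same prompt into the regular ChatGPT interface works just as well.

```python
# A minimal sketch of asking ChatGPT to explain (not silently fix) errors in a
# paragraph you wrote yourself. Assumes the `openai` Python package is
# installed and OPENAI_API_KEY is set in your environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

my_paragraph = "Last summer I volunteered at the libary, which teached me alot about patience."

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": (
                "List any spelling or grammar errors in the paragraph below, "
                "explain why each one is an error, and suggest a fix. Do not "
                "rewrite the paragraph for me.\n\n" + my_paragraph
            ),
        }
    ],
)

print(response.choices[0].message.content)
```

Note how the prompt asks for explanations rather than a rewrite: the point is to learn from the feedback on your own writing, not to hand the writing over to the chatbot.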

Before you use ChatGPT—even for the tasks mentioned above—you should talk to your teacher or school about their AI and academic dishonesty policies. It’s also a good idea to include an acknowledgement that you used ChatGPT with an explanation of its use. 


This guy made some bad decisions using ChatGPT. Don't be this guy.

ChatGPT: The Bad

The first model of ChatGPT (GPT-3.5) was formally introduced to the public in November 2022, and the newer model (GPT-4) in March 2023. So, it’s still very new and there’s a lot of room for improvement .  

There are many misconceptions about ChatGPT. One of the most extreme is that the AI is all-knowing and can make its own decisions. Another is that ChatGPT is a search engine that, when asked a question, can just surf the web for timely, relevant resources and give you all of that information. Both of these beliefs are incorrect because ChatGPT is limited to the information it’s been given by OpenAI . 

Remember how the ‘PT’ in ChatGPT stands for “Pre-trained”? That means that every time OpenAI gives ChatGPT an update, it’s given more information to work with (and so it has more information to share with you). In other words, it’s “trained” on information so it can give you the most accurate and relevant responses possible—but that information can be limited and biased . Ultimately, humans at OpenAI decide what pieces of information to share with ChatGPT, so it’s only as accurate and reliable as the sources it has access to.

For example, if you were to ask ChatGPT-3.5 what notable headlines made the news last week, it would respond that it doesn’t have access to that information because its most recent update was in September 2021!

You’re probably already familiar with how easy it is to come across misinformation (misleading and untrue information) on the internet. Since ChatGPT can’t tell the difference between what is true and what isn’t, it’s up to the humans at OpenAI to make sure only accurate and true information is given to the chatbot. This leaves room for human error, and users of ChatGPT have to keep that in mind when using and learning from the chatbot.

These are just the most obvious problems with ChatGPT. Some other problems with the chatbot include:

  • A lack of common sense. ChatGPT can create seemingly sensical responses to many questions and topics, but it doesn’t have common sense or complete background knowledge.
  • A lack of empathy. ChatGPT doesn’t have emotions, so it can’t understand them, either. 
  • An inability to make decisions or problem solve . While the chatbot can complete basic tasks like answering questions or giving recommendations, it can’t solve complex tasks. 

While there are some great uses for ChatGPT, it’s certainly not without its flaws.


Our bootcamp can help you put together amazing college essays that help you get into your dream schools—no AI necessary.

What Other Tools and Strategies Can Help Students Besides ChatGPT?

While it’s not a good idea to use ChatGPT for college admissions materials, it’s not the only tool available to help students with college essays and assignments. 

One of the best strategies students can use to write good essays is to make sure they give themselves plenty of time for the assignment. The writing process includes much more than just drafting! Having time to brainstorm ideas, write out a draft, revise it for clarity and completeness, and polish it makes for a much stronger essay. 

Teachers are another great resource students can use, especially for college application essays. Asking a teacher (or two!) for feedback can really help students improve the focus, clarity, and correctness of an essay. It’s also a more interactive way to learn—being able to sit down with a teacher to talk about their feedback can be much more engaging than using other tools. 

Using expert resources during the essay writing process can make a big difference, too. Our article outlines a complete list of strategies for students writing college admission essays. It breaks down what the Common Application essay is, gives tips for choosing the best essay topic, offers strategies for staying focused and being specific, and more.  

You can also get help from people who know the college admissions process best, like former admissions counselors. PrepScholar’s Admissions Bootcamp guides you through the entire application process , and you’ll get insider tips and tricks from real-life admissions counselors that’ll make your applications stand out. Even better, our bootcamp includes step-by-step essay writing guidance , so you can get the help you need to make sure your essay is perfect.  

If you’re hoping for more technological help, Grammarly is another AI tool that can check writing for correctness. It can correct things like misused and misspelled words and grammar mistakes, and it can improve your tone and style. 

It’s also widely available across multiple platforms through a Windows desktop app, an Android and iOS app, and a Google Chrome extension. And since Grammarly just checks your writing without doing any of the work for you, it’s totally safe to use on your college essays. 

The Bottom Line: ChatGPT College Admissions and Essays

ChatGPT will continue to be a popular discussion topic as it continues evolving. You can expect your chosen schools to address ChatGPT and other AI tools in their academic honesty and plagiarism policies in the near future—and maybe even to restrict or ban the use of the chatbot for school admissions and assignments.

As AI continues transforming, so will AI-detection. The goal is to make sure that AI is used responsibly by students so that they’re avoiding plagiarism and building their research, writing, and critical thinking skills. There are some great uses for ChatGPT when used responsibly, but you should always check with your teachers and schools beforehand.

ChatGPT’s “bad” aspects still need improving, and that’s going to take some time. Be aware that the chatbot isn’t even close to perfect, and it needs to be fact-checked just like other sources of information.

Similarly to other school assignments, don’t submit a ChatGPT college essay for college applications, either. College entrance essays should outline unique and interesting personal experiences and ideas, and those can only come from you.  

Just because ChatGPT isn’t a good idea doesn’t mean there aren’t resources to help you put together a great college essay. There are many other tools and strategies you can use instead of ChatGPT , many of which have been around for longer and offer better feedback. 


What’s Next?

Ready to write your college essays the old-fashioned way? Start here with our comprehensive guide to the admissions essays. 

Most students have to submit essays as part of their Common Application . Here's a complete breakdown of the Common App prompts —and how to answer them. 

The most common type of essay answers the "why this college?" prompt. We've got an expert breakdown that shows you how to write a killer response , step by step. 

Want to write the perfect college application essay? We can help. Your dedicated PrepScholar Admissions counselor will help you craft your perfect college essay, from the ground up. We learn your background and interests, brainstorm essay topics, and walk you through the essay drafting process, step-by-step. At the end, you'll have a unique essay to proudly submit to colleges. Don't leave your college application to chance. Find out more about PrepScholar Admissions now.

Ashley Sufflé Robinson has a Ph.D. in 19th Century English Literature. As a content writer for PrepScholar, Ashley is passionate about giving college-bound students the in-depth information they need to get into the school of their dreams.



Did student or ChatGPT write that paper? Does it matter?

Sam Altman, CEO of firm that developed app, says ethics do matter, but they need to be rethought (and AI isn’t going away)


Colleges and universities have been wrestling with concerns over plagiarism and other ethical questions surrounding the use of AI since the emergence of ChatGPT in late 2022.

But Sam Altman, whose company, OpenAI, launched the chatbot app, said during a campus visit Wednesday that AI is such a powerful tool that higher education would be doing its students a disservice by turning its back on it — if that were even possible now. And some of the old rules of ethics will need to be rethought.

“Cheating on homework is obviously bad,” said Altman. “But what we mean by cheating and what the expected rules are does change over time.”

Altman discussed AI in the academy, along with the subtleties of using ChatGPT and other generative AI tools, while at the University to receive the Experiment Cup from Xfund, an early-stage venture capital firm. That event was sponsored by the John A. Paulson School of Engineering and Applied Sciences, Harvard Business School, and the Institute for Business in Global Society (BiGS). It featured a conversation between Altman and Xfund co-founder Patrick Chung ’96.

Speaking to the Gazette before the Cup presentation, Altman likened the initial uproar at schools over ChatGPT to the ones that arose after the arrival of calculators and, later, search engines like Google. “People said, ‘We’ve got to ban these because people will just cheat on their homework,’” he said.

Altman, who left Stanford at 19 to start Loopt, a location-sharing social media app, said the reaction to calculators, for instance, was overblown. “If people don’t need to calculate a sine function by hand again … then mathematical education is over,” he said, with a gentle half-smile on his face.

Altman helped launch OpenAI in 2015 and its wildly influential ChatGPT — which can write papers and generate computer programs, among other things — before being removed in 2023 and then reinstated four days later as the company’s CEO.

ChatGPT, he said, has the potential to exponentially increase productivity in the same way calculators freed users from performing calculations by hand, calling the app “a calculator for words.”

He warned, “Telling people not to use ChatGPT is not preparing people for the world of the future.”

Following a bit of back-and-forth about how the ethics of using ChatGPT and other generative AI may differ in various disciplines, Altman came down hard in favor of utility, praising AI’s massive potential in every field.

“Standards are just going to have to evolve,” he said. He dismissed the notion that ChatGPT could be used for writing in the sciences, where the emphasis is on the findings, but not in the humanities, where the expression of ideas is central.

“Writing a paper the old-fashioned way is not going to be the thing,” he said. “Using the tool to best discover and express, to communicate ideas, I think that’s where things are going to go in the future.”

Altman, who last month joined the Department of Homeland Security’s Artificial Intelligence Safety and Security Board, said ethics remains a concern, and one that has yet to be resolved.

“There will be a conversation about what are the absolute limits of the tool, how do we as a society … negotiate ‘Here is what AI systems can never do.’ Where do we set the defaults? How much does an individual user get to move things around within those boundaries? How do we think about different countries’ laws?”

However, that discussion should not slow the development of AI. Instead, Altman described parallel tracks.

“Generally speaking, I do think these are tools that should do what their users want,” he said, before adding an important, if less than specific, caveat: “But there are going to have to be real limits.”


ChatGPT: Educational friend or foe?

Kathy Hirsh-Pasek, Senior Fellow, Global Economy and Development, Center for Universal Education, and Elias Blinkoff, Research Scientist, Department of Psychology and Neuroscience, Temple University

January 9, 2023

The invention of the telephone in 1876 was met with simultaneous amazement and trepidation. Critics wondered if phones would disrupt face-to-face communication in ways that made us either too active or lazy. When television entered our homes, we fretted about the potential harms of the box and screen time in every living room. Surely, this would create a society of couch potatoes who do not even notice the people sitting by their side and fail to engage in more important activities. The definition of "screen time" was later broadened to include the impacts of digital content and social media on children. Indeed, a recent article in The Atlantic by Professor Jonathan Haidt warns that the generation raised on social media could even imperil American capitalism and culture.

The latest challenge to the creative human intellect was introduced on November 30, 2022, by OpenAI. ChatGPT is a conversational bot that responds to users’ questions in ways that allow it to search large databases and create well-formed essays, legal briefs, poetry in the style of Shakespeare, computer code, or lyrics in the style of Rodgers and Hammerstein, to name a few. As New York Times writer Kevin Roose commented, "ChatGPT is, quite simply, the best artificial intelligence chatbot ever released to the general public."


As with the telephone, however, ChatGPT is primarily being met with amazement and trepidation. Some in education fear that students will never need to learn to write, as they can merely lean on ChatGPT. Writing for The Atlantic, English teacher Daniel Herman worried that ChatGPT spelled "The End of High School English." In the same publication, Stephen Marche declared the college essay "dead." Fortune magazine quipped, "Is ChatGPT the end of trust? Will the college essay survive?" On January 3, 2023, the New York City Department of Education took the dramatic step of responding to these fears by blocking access to ChatGPT on all department devices and networks. A department spokesperson justified the decision by citing "…concerns about negative impacts on student learning, and concerns regarding the safety and accuracy of content." She further questioned the educational value of the technology, stating: "While the tool may be able to provide quick and easy answers to questions, it does not build critical-thinking and problem-solving skills, which are essential for academic and lifelong success."

Educators, opinion writers, and researchers are engaged in a vibrant discussion about the implications of ChatGPT right now. The emerging consensus is that teachers and professors might be tricked. That is, ChatGPT would surely pass the Turing test. For example, Daniel Herman describes how the program drafted a reasonable college essay, a cover letter to serve as a manager at Starbucks, and even an academic paper comparing two texts. Microbiologist Alex Berezow further discovered that ChatGPT excelled at answering short-response questions from a college-level microbiology quiz. However, the essays produced by ChatGPT are still identifiable as bot-produced, rather than human-produced, due to a few fundamental flaws. The high school English paper that the program composed for Daniel Herman was superficial and lacked references. Other reports indicate that the program includes inaccurate information and fails to provide a compelling perspective linking the writer and reader.

In our own test, the first author (Kathy) gave the bot a complicated essay question that she asks her Honors psychology students to answer. It did a respectable job, yet the bot produced no more than a B- or C+ essay. Why? To date, the bot cannot distinguish the "classic" article in a field that must be cited from any other article that reviews the same content. The bot also tends to cite the same sources repeatedly. These are issues that can be easily resolved in the next iteration.

More central, however, is that the bot is more of a synthesizer than a critical thinker. It would do well on a compare-and-contrast essay but is less able to create a unique thesis and defend it.

As educators, we strive to make our students what John Bruer, former president of the McDonnell Foundation, dubbed knowledge transformers rather than knowledge digesters. That means that memorization is less valued than critical thinking. In fact, one of the general problems with many educational systems today is that they value learning the facts more than being able to remember information over time, generalize the learning to new situations, and creatively develop a new way of thinking about an issue. In a world in which all of the information since the beginning of time is said to double every 12 hours, memorization of facts quickly loses its currency.

How can ChatGPT create knowledge transformers?

The question before us is how we can productively use ChatGPT to help our students become knowledge transformers. A writer, a teacher, and an education professor all suggest an analogy between the calculator in math and ChatGPT in writing. In the same way that calculators became an important tool for students in math classes, ChatGPT has the potential to become an important tool for writers who want to hone their critical thinking skills along with their communication skills. How might this happen? Educators are responding with valuable approaches. Adam Stevens, a high school history teacher in New York City who opposes his district’s decision to block ChatGPT, sees it as a valuable tool to promote—not limit—critical thinking. Students can evaluate the program’s initial response to a prompt, then consider how to improve it through revision. Other teachers quoted in a recent report on ChatGPT for Ed Week advocate a similar approach, using the program to focus on the writing process. In higher education, we can openly let our students use ChatGPT for their class assignments as well, and even use the bot in class to generate a first draft. Students can then learn how to move beyond the first draft to make their essays better. This is precisely the method that the first author will adopt at the start of her class after winter break.

Deeper, more engaged learning

Our students already know how to use this new tool. They are likely more sophisticated than their teachers at framing the questions and getting solid answers from the bot, even though it was just released. What they need to learn is why—at least for the moment—ChatGPT would get a lower grade than they could get. It is exciting to see how quickly educators are responding to this new reality in the classroom and recognizing the instructional value of ChatGPT for deeper, more engaged learning.

As Adam Stevens remarks, ChatGPT is only a threat if our education system continues to "pursue rubric points and not knowledge." It is critical for all educators to follow their colleague’s example. As we note in our recent book, "Making Schools Work," the old education model in which teachers deliver information to later be condensed and repeated will not prepare our students for success in the classroom—or the jobs of tomorrow. We should allow that model to die a peaceful death. Used in the right way, ChatGPT can be a friend to the classroom and an amazing tool for our students, not something to be feared.


Two professors who say they caught students cheating on essays with ChatGPT explain why AI plagiarism can be hard to prove

  • Two philosophy professors said they caught their students submitting essays written by ChatGPT.
  • They said certain red flags alerted them to the use of AI.
  • If students don't confess to using the program, professors say it can be hard to prove.


A few weeks after the launch of the AI chatbot ChatGPT, Darren Hick, a philosophy professor at Furman University, said he caught a student turning in an AI-generated essay.

Hick said he grew suspicious when the student turned in an on-topic essay that included some well-written misinformation.

After he ran it through OpenAI's ChatGPT detector, the results said it was 99% likely the essay had been AI-generated.

Antony Aumann, a religious studies and philosophy professor at Northern Michigan University, told Insider he had caught two students submitting essays written by ChatGPT.

After the writing style set off alarm bells, Aumann submitted the essays back to the chatbot, asking how likely it was that they were written by the program. When the chatbot said it was 99% sure the essays were written by ChatGPT, he forwarded the results to the students.

Both Hick and Aumann said they confronted their students, all of whom eventually confessed to the infraction. Hick's student failed the class and Aumann had his students rewrite the essays from scratch.

'It was really well-written wrong'

There were certain red flags in the essays that alerted the professors to the use of AI. Hick said the essay he found referenced several facts not mentioned in class, and made one nonsensical claim. 

"Word by word it was a well-written essay," he said, but on closer inspection, one claim about the prolific philosopher David Hume "made no sense" and was "just flatly wrong."

"Really well-written wrong was the biggest red flag," he said.


For Aumann, the chatbot just wrote too perfectly. "I think the chat writes better than 95% of my students could ever," he said. 

"All of a sudden you have someone who does not demonstrate the ability to think or write at that level, writing something that follows all the requirements perfectly with sophisticated grammar and complicated thoughts that are directly related to the prompt for the essay," he said.

Christopher Bartel, a professor of philosophy at Appalachian State University, said that while the grammar in AI-generated essays is almost perfect, the substance tends to lack detail.

He said: "They are really fluffy. There's no context, there's no depth or insight."

Hard-to-prove plagiarism  

If students don't confess to using AI for essays, it can leave academics in a tough spot.

Bartel said that some institutions' rules haven't evolved to combat this kind of cheating. If a student decides to dig their heels in and deny the use of AI, it can be difficult to prove.

Bartel said the AI detectors on offer were "good but not perfect." 

"They give a statistical analysis of how likely the text is to be AI-generated, so that leaves us in a difficult position if our policies are designed so that we have to have definitive and demonstrable proof that the essay is a fake," he said. "If it comes back with a 95% likelihood that the essay is AI generated, there's still a 5% chance that it wasn't." 

In Hick's case, although the detection site said it was "99% certain" the essay had been generated by an AI, he said it wasn't enough for him without a confession.

"The confession was important because everything else looks like circumstantial evidence," he said. "With AI-generated content, there is no material evidence, and material evidence has a lot more weight to it than circumstantial evidence."

Aumann said although he thought the analysis by the chatbot would be good enough proof for disciplinary action, AI plagiarism was still a new challenge for colleges.

He said: "Unlike plagiarism cases of old where you can just say, 'hey, here's the paragraph from Wikipedia.' There is no knockdown proof that you can provide other than the chat says that's the statistical likelihood."

Axel Springer, Business Insider's parent company, has a global deal to allow OpenAI to train its models on its media brands' reporting.


So How Did ChatGPT Fare In MBA Essays?


When ChatGPT first exploded onto the scene, some MBA candidates considered it a gold mine.

“No more paying $400/hr for a consultant!”

Did that promise come to fruition?

The MBA admissions consulting industry’s first response was pretty lame.

No one should be surprised that if you simply ask ChatGPT to write a Harvard MBA essay, you will end up with a cliché overload like this one, published last spring in Poets&Quants:

“I am thrilled to submit my application for the Harvard Business School MBA program, and I appreciate the opportunity to elaborate on what I believe makes me a strong candidate for the program.

Throughout my academic and professional career, I have developed a passion for business and a desire to make a meaningful impact in the world through my work. My experiences have shaped my perspectives, and I believe they will allow me to bring a unique and valuable perspective to the classroom and beyond.”


In my June 2023 webinar "Who will be a successful MBA candidate this year" (don't miss this year's edition!), I made two predictions. In my experience, candidates consider using AI for their essays to solve two problems.

One is not knowing what to write about.

The other is not knowing how to write. Not trusting their writing abilities.

ChatGPT is not the cure for either of these.

That was my position last year and it continues to be my position today.

All but one of my MBA candidates this cycle were admitted to at least one of their top MBA choices (yes, even the ones who only applied to H/S/W….). Not a single one had to use ChatGPT to write their essays — or had any interest in even trying.

Knowing what to write about can only come from two sources — knowing yourself well (self-reflection and self-awareness) and knowing your schools well.

And when it comes to the actual writing: Remember, it’s not about filling pages with words.

The biggest challenge in MBA essay writing is selecting vignettes from your life and weaving them into your story.

When I met and spoke with the AdComs of the top MBA programs last summer, the consensus was that there is one litmus test: Am I hearing your voice? Am I seeing your personality?

Now, don’t let this message lead you to believe I’m anything but an AI-enthusiast. ChatGPT might not be the essay-writing savior you hoped for, but it’s still got many tricks up its sleeve.

Need help coming up with alternatives for expressing a pesky concept? ChatGPT’s your guy.

Struggling to trim down that word count? It’s got your back.

Plus, it’s a pro at improving readability and keeping those pesky tenses in check.

So yeah, while ChatGPT might not be the end-all-be-all, it’s still a handy tool to have in your MBA application arsenal.

Just remember, your voice and personality are what make you stand out in a sea of applicants.


Petia Whitmore is the founder of My MBA Path, an admissions consulting firm. She is a former dean of MBA admissions at her alma mater, Babson College, and the former managing director of The MBA Tour. This commentary was first published on LinkedIn.


Synthesis Essay Examples

A synthesis essay is another type of academic writing that students often find difficult to master. This assignment requires a more nuanced approach to research and writing. It's particularly relevant to students taking the AP English Language and Composition exam, so learning how to write a synthesis essay is crucial to earning a high score.

This article will explore the definition of a synthesis essay, its functions and objectives, and provide a tutorial on how to write one.

What is a synthesis essay?

To understand how to write a synthesis essay, we first need to figure out why it is called this way.

The word “synthesis” comes from the Greek language where it means “composition” or “collection.” This means that a synthesis essay can be interpreted as a piece of writing that combines something together. But what?

The Advanced Placement (AP) Program, known for developing complex courses for high school students, includes a synthesis essay as one of its Language and Composition exam questions. In it, the AP Program asks students to analyze several sources of information and write an essay that "synthesizes" (or incorporates) evidence from some of the sources.

Thus, a synthesis essay is a written text that explores a certain issue using perspectives derived from multiple different sources.

Synthesis essay: format and objectives

Unlike other types of academic analysis, synthesis questions do not aim to evaluate the overall persuasiveness of your arguments. As a writer, you should aim to analyze, evaluate, and integrate diverse ideas into a coherent whole. Here are some of the skills students need to demonstrate in their synthesis essays:

  • Analyzing sources. Before you learn how to start a synthesis essay, your task is to read and analyze the sources presented to you and understand what they're about.
  • Assessing the arguments. After familiarizing yourself with the available sources, you are supposed to evaluate whether the arguments they support are strong or weak, which will help you determine the course of your essay.
  • Identifying common positions. The next skill you must demonstrate is identifying common positions across the sources. By comparing and contrasting different viewpoints, the writer should be able to detect recurring ideas that contribute to a deeper understanding of the topic.
  • Integrating sources. Your main task in a synthesis essay is combining ideas from different authors to create a cohesive argument. This will help you show how well you can extract information from various sources.

As you see, the chief goal of synthesis questions is to show how well you can analyze sources and derive information from them.

How to start a synthesis essay: tutorial

During an AP examination, you don't have a lot of time to write the text. It can be stressful, and it's not rare for students to panic and forget what to do. Don't worry: with these simple steps, you'll be able to create a great synthesis essay and ace your exam.

1. Scan the given sources

First, you will be handed six sources that you're supposed to briefly examine. These can include academic and newspaper articles, graphs, schedules, prompts, and other documents that can be used to support your future thesis statement.

Remember that you don’t have a lot of time, so take a quick look at the documents and leave short remarks that can help you remember which source supports or argues a certain opinion.

2. Develop your stance

After you’ve studied the sources, it’s time to come up with your stance and thesis statement. Note that, unlike other essays, the stance you must take in your synthesis essay might not correlate with your actual opinion.

Your task is to choose a position that you can support with the sources provided to you. This will showcase your ability to draw an unbiased and logical conclusion from a wide range of references. However, your stance should express an original idea and cannot paraphrase the points given in the source texts.

3. Write your essay

Your essay should start with a two- or three-sentence introduction that gives background on the topic you're going to be writing about. It should also include your thesis – the idea, based on the evidence you've gathered, that you're going to defend in the next part of your essay. Don't use personal pronouns, as a synthesis essay provides an overview of facts rather than your opinion.

The body of your synthesis essay should be built of several arguments. Each argument should refer to a specific part of your thesis and provide evidence to support the claims. Use the sources provided to you as evidence to validate your arguments. You should use at least three sources, but the more you incorporate in your text, the better. You can draw arguments and evidence from your background knowledge or include counter-arguments from the remaining sources. When you refer to the original documents, make sure to include the number of the source in brackets at the end of a sentence.

In your conclusion, restate your original thesis and summarize what you stated before. Don’t repeat the same thoughts. Instead, include a new idea you haven’t mentioned before or a call to action to finish your essay properly.

Synthesis essay: examples

The list of sources provided as part of the examination:

  • A New York Times article about the relevance of blue-collar workers;
  • A Washington Post article about the uselessness of art degrees;
  • The Economist’s article about the decreasing wages of college graduates;
  • A New York Times article proving that college does pay off;
  • An article about a businessman giving money to teens to start businesses instead of going to college;
  • A survey on whether college education is worth it.

Is college worth it?

In the current era of shifting economic landscapes and evolving societal expectations, the value of higher education has become a subject of intense scrutiny. While some decades ago, a college education was considered the only solution to a better life, nowadays this sentiment is no longer relevant. Higher education can no longer guarantee high salaries and employment, not to mention the unbearable strain it puts on a future graduate’s finances.

The modern world of employment has shifted. While decades ago society needed information-centric professionals, now the situation is different. With the Internet, employers can now find new hires from all over the world with much cheaper salary expectations, leaving local college graduates with no choice but to agree to a lower pay than they expected[3]. This demonstrates the new trend of decreasing rewards for higher education that is very likely to continue in the future.

Another issue is the lack of employment in certain areas. It is no secret that Art and Humanities graduates have a tough time finding positions with adequate pay in the field they studied[2]. Many of them have to search for employment in other fields that have nothing to do with their degrees, which further proves that higher education does not provide job security.

Furthermore, the cost of higher education in America has been the subject of many debates. Even with scholarships and financial aid, many students still find themselves facing daunting loan repayments upon graduation[6]. This financial pressure can delay important milestones such as buying a home, starting a family, or saving for retirement. Additionally, the job market may not always align with graduates' expectations, making it challenging to secure well-paying positions to effectively manage their debt. As a result, the financial impact of college can be felt long after receiving a diploma, shaping the economic landscape of young professionals for years to come.

In conclusion, higher education no longer offers guaranteed employment and financial stability, often leaving graduates with exorbitant debt they cannot afford. Because of this, governments should reevaluate their current educational and economic policies and develop other areas of education, like vocational schools, to provide stability to future generations.

Conclusion: Writing a synthesis essay

A synthesis essay tests your ability to conduct objective analysis and derive facts from multiple different references. It helps you learn to put aside your personal bias and provide an objective overview of information even if it contradicts your opinion. To produce a high-scoring synthesis essay, work on your analytical skills and use them to find evidence to defend your position.

If answering synthesis questions gives you trouble, use the essay generator Aithor to generate sample essays, learn how to derive the main information from source texts, create a plan, and express your thoughts concisely and eloquently.



How Is ChatGPT Transforming K-12 Education in Colorado?

Teachers are redesigning assignments, administrators are revisiting policies, and students are still finding their footing as they navigate the new frontier of yet another disruptive technology.


Will Knight

OpenAI Wants Everyone to Build Their Own Version of ChatGPT


OpenAI’s ChatGPT became a phenomenon thanks to its wide-ranging abilities, such as drafting college essays, writing working computer programs, and digging up information from across the web.

Now the company aims to further widen the range of tricks up ChatGPT’s sleeve by making it possible for anyone to build a custom chatbot powered by the technology—without any coding skills. OpenAI suggests people might want to build custom bots to help with specific problems or interests in their lives, such as learning the rules of a board game, teaching their kids math, or designing stickers using AI-generated art.

To create one of these custom bots or AI agents , which OpenAI calls “GPTs,” a user need only specify, by talking with ChatGPT, what they would like the bot to do. Behind the scenes, ChatGPT will write the code needed to create and run the new bot. The bots can plug into other sites and services to do things like access databases, search emails, and automate ecommerce orders, OpenAI says.

“GPTs are a new way for anyone to create a tailored version of ChatGPT to be more helpful in their daily life, at specific tasks, at work, or at home—and then share that creation with others,” OpenAI said in a blog post today. It will launch an online chatbot store later this month where users will be able to find GPTs from “verified builders” chosen by the company.

OpenAI says chatbot builders will be able to make money by charging for access to their GPTs. People who pay for Plus or Enterprise ChatGPT subscriptions will be able to start building chatbots from today. The company says some companies, including biotech company Amgen, management consultancy Bain, and payments processor Square are already building and using GPTs internally.
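For readers who want to see what a tailored assistant looks like in code, the sketch below is a rough programmatic analogue, not OpenAI's no-code GPT Builder itself. It assumes the openai Python package (v1 or later), an OPENAI_API_KEY environment variable, and an illustrative model name, and it simply attaches fixed custom instructions to every request.

# Minimal sketch of a "tailored" assistant via OpenAI's chat API (an analogue, not the GPT Builder).
# Assumptions: `pip install openai` (v1+), OPENAI_API_KEY set in the environment, illustrative model name.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

BOARD_GAME_TUTOR = (
    "You are a patient board-game rules tutor. "
    "Answer only questions about game rules, and quote the relevant rule when you can."
)

def ask_tutor(question: str) -> str:
    """Send one user question with the fixed custom instructions attached."""
    response = client.chat.completions.create(
        model="gpt-4-turbo",  # illustrative placeholder; use whichever model your account offers
        messages=[
            {"role": "system", "content": BOARD_GAME_TUTOR},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask_tutor("In chess, when is castling not allowed?"))

The real GPTs go further, plugging into outside services and actions, but the same idea of pinned instructions plus a conversation underlies both.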

OpenAI is launching a kind of app store for chatbots powered by its technology.


OpenAI announced its new custom chatbot features at its first ever developer conference, OpenAI DevDay, in San Francisco. “You can build a GPT—a customized version of ChatGPT—for almost anything,” Sam Altman, OpenAI’s CEO, said at the event. “Because they combine instructions, expanded knowledge, and actions, they can be more helpful to you.”

The company said today that more than 2 million developers and over 92 percent of Fortune 500 companies are using its APIs, which provide access to ChatGPT or the underlying text- and image-generating technology in some way. OpenAI says ChatGPT now has around 100 million weekly active users. “About a year ago, we shipped ChatGPT as a low-key research preview—and that went pretty well,” Altman said. “OpenAI is the most advanced and widely-used AI platform in the world now.”

In the year since ChatGPT was launched, the bot has just about upended the tech industry. Google and other Big Tech companies have reorganized their operations to focus on building similar AI tools. Dozens of well-funded startups now offer alternative AI models. And governments have felt compelled to make moves aimed at mitigating misuse of powerful models. The new custom chatbots could see OpenAI jump ahead of its AI competitors once more, if they become popular.

Kanjun Qiu, CEO of Imbue, a startup developing safer, more capable AI agents, says OpenAI's move makes sense because for chatbots to be more powerful they have to be able to take action. But because current AI models have limited powers of reasoning, there are many tasks they can't perform and they don't work reliably without human oversight. “We'll get closer to 'true' agents when we make progress on both,” she says.

Silen Naihin, a developer who previously worked on Auto-GPT, an open source AI agents project, and now leads a startup called Stackwise, says OpenAI's announcements will help it keep a step ahead of startups trying to build more powerful chatbots. But if GPTs take off, they could end up competing with some of OpenAI's own customers, like startups building simple tools on top of its technology.

OpenAI today also announced a new version of its most powerful text-generation model, called GPT-4 Turbo, which includes more recent information from up to April 2023. ChatGPT could previously draw on information from only up to 2021, because that is when the data used to train the underlying AI model was collected. An upgrade in September allowed ChatGPT to access some more recent information by browsing the web.

GPT-4 Turbo will also be able to take in larger amounts of text—equivalent to 300 pages—to use as instructions or to work with, allowing it to search and synthesize information from more documents. OpenAI also says its APIs will now allow models to accept and generate images and audio in the same way ChatGPT was recently upgraded to take in images . And it says that it cut the prices of all its APIs, making the cost for using its most advanced model between a half and a third of what it previously was.
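To make the multimodal point above concrete, here is a minimal, hedged sketch of sending an image alongside text through the same chat API; the model name and image URL are placeholders, and the exact features available to a given account may differ.

# Minimal sketch of a text-plus-image request through OpenAI's chat API.
# Assumptions: openai Python package (v1+), OPENAI_API_KEY set; model name and URL are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-turbo",  # illustrative; substitute a vision-capable model you have access to
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Summarize what this chart shows in two sentences."},
                {"type": "image_url", "image_url": {"url": "https://example.com/chart.png"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)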

OpenAI's new tools allow anyone to create their own custom chatbot by chatting with the company's GPT Builder bot.

