This week, Daniel and Wences put on a great Yosigopublicando (YSP) course about ChatGPT, the new artificial intelligence (AI) software that has disrupted the technology world. While the underlying technology has been under development for years, it was only recently released to the public, and its limitations are still being explored.
There are many things ChatGPT can do for you, such as writing and summarizing texts, explaining algorithms and concepts, solving problems, and translating between languages, to name a few. This new technology has many applications in the science world; for example, it can be used to better understand lengthy or wordy articles, to draft cover letters and resumes, or to proofread texts. However, along with this new technology come new ethical dilemmas. Should text written by ChatGPT be cited? Should it even be published? Already, several journals have banned content created by ChatGPT, arguing that it violates the transparency required in the scientific world, and this is likely to become the new norm.
Not only is this technology useful to science, but it also has applications in university settings. Students across the country are already using ChatGPT to write their essays, take their exams, and do their homework. There is a fine line between using this technology to advance learning and using it as a form of cheating, and where that line falls is still being worked out as AI becomes more widespread. Will we have to stop assigning essays, give only oral and in-person exams, and switch to assignments that AI cannot solve? ChatGPT is changing the face of education, and it is important to embrace it rather than resist it, as this disruption will only increase as more AI tools are released.
While ChatGPT is being used in many settings, it is important to note that the software has significant limitations. Its writing skills, especially in the scientific field, are currently subpar, and it has been shown to fabricate citations to papers and authors that do not exist. There is also the potential for it to be hacked or manipulated into spreading insults and misinformation, so everything it produces should be taken with a grain of salt.
While the future of AI is unclear, this technology is only going to expand in the near future. There is enormous potential for this software to drive positive change in society, but it must be implemented carefully to avoid harm and misinformation. During the session, Daniel and Wences did a great job exploring these topics, and more information and resources can be found here.