File:ChatGPT is bullshit, s10676-024-09775-5.pdf
From Wikimedia Commons, the free media repository
Original file (1,239 × 1,645 pixels, file size: 735 KB, MIME type: application/pdf, 10 pages)
Summary
Description | English: Recently, there has been considerable interest in large language models: machine learning systems which produce human-like text and dialogue. Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called “AI hallucinations”. We argue that these falsehoods, and the overall activity of large language models, are better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs. We distinguish two ways in which the models can be said to be bullshitters, and argue that they clearly meet at least one of these definitions. We further argue that describing AI misrepresentations as bullshit is both a more useful and more accurate way of predicting and discussing the behaviour of these systems. |
Date | 11 July 2024 |
Source | https://link.springer.com/article/10.1007/s10676-024-09775-5 |
Author | Michael Townsen Hicks, James Humphries & Joe Slater |
Licensing
This file is licensed under the Creative Commons Attribution 4.0 International license.
- You are free:
- to share – to copy, distribute and transmit the work
- to remix – to adapt the work
- Under the following conditions:
- attribution – You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
File history
Click on a date/time to view the file as it appeared at that time.
Date/Time | Thumbnail | Dimensions | User | Comment |
---|---|---|---|---|
current | 10:10, 24 September 2024 | 1,239 × 1,645, 10 pages (735 KB) | Yann (talk | contribs) | {{Information}} template with the description given in the Summary above |
Metadata
This file contains additional information such as Exif metadata which may have been added by the digital camera, scanner, or software program used to create or digitize it. If the file has been modified from its original state, some details such as the timestamp may not fully reflect those of the original file. The timestamp is only as accurate as the clock in the camera, and it may be completely wrong.
Field | Value |
---|---|
Unique ID of original document | adobe:docid:indd:24bb93d9-bd1e-11dd-84db-a83b50d2aaad |
Date and time of digitizing | 09:34, 9 July 2024 |
File change date and time | 09:15, 12 July 2024 |
Date metadata was last modified | 09:15, 12 July 2024 |
Software used | Adobe InDesign 18.3 (Windows) |
Conversion program | Adobe PDF Library 17.0; modified using iText® 5.3.5 ©2000-2012 1T3XT BVBA (SPRINGER SBM; licensed version) |
Encrypted | no |
Page size | 595.276 x 790.866 pts |
Version of PDF format | 1.4 |