carbonvilla.blogg.se

Deep fakes porn
Deepfakes are realistic videos created using new machine learning (specifically, deep learning) techniques (see Floridi 2018). They are not produced by traditional photographic means where the light reflected from a physical object is directed by lenses and mirrors onto a photosensitive surface. Deepfakes tend to depict people saying and doing things that they did not actually say or do. A high profile example is "face-swap porn" in which the faces in pornographic videos are seamlessly replaced with the faces of celebrities (see Cole 2018). But for almost any event, these techniques can be used to create fake videos that are extremely difficult to distinguish from genuine videos. Notably, the statements or actions of politicians, such as former President Obama, can be, and have been, fabricated (see Chesney and Citron 2019; Toews 2020). In the news media and the blogosphere, the worry has been raised that, as a result of deepfakes, we are heading toward an "infopocalypse" where we cannot tell what is real from what is not (see Rothman 2018; Schwartz 2018; Warzel 2018; Toews 2020).

Philosophers, such as Deborah Johnson, Luciano Floridi, and Regina Rini (2019) and Michael LaBossiere (2019), have now issued similar warnings. As Floridi puts it, "do we really know what we're watching is real? … Is that really the President of the United States saying what he's saying?" In this paper, I offer an analysis of why deepfakes are such a serious threat to knowledge. Utilizing the account of information carrying recently developed by Brian Skyrms (2010), I argue that deepfakes reduce the amount of information that videos carry to viewers. I also draw some implications of this analysis for what can be done to address the epistemic threat of deepfakes.

In order to survive and flourish, people need to constantly acquire knowledge about the world. And since we do not have unlimited time and energy to do this, it is useful to have sources of information that we can simply trust without a lot of verifying. Direct visual perception is one such source. But we cannot always be at the right place, at the right time, to see things for ourselves. In such cases, videos are often the next best thing. For example, we can find out what is going on at great distances from us by watching videos on the evening news. Moreover, we make significant decisions based on the knowledge that we acquire from videos. For example, videos recorded by smart phones have led to politicians losing elections (see Konstantinides 2013), to police officers being fired and even prosecuted (see Almukhtar et al. 2018), and, most recently, to mass protests around the world (see Stern 2020). And we are significantly more likely to accept video evidence than other sources of information, such as testimony. Thus, videos are extremely useful when collective agreement on a topic is needed (see Rini 2019).

Many videos that we watch simply show people (reporters, politicians, teachers, friends, etc.) speaking. We do not learn much directly about the world from such videos, other than that a particular person said a particular thing. In such cases, videos are just another way of receiving testimony along with face-to-face conversations, phone calls, and e-mails. But even then, videos can still provide an important epistemic benefit. If we know (a) that X is trustworthy and (b) that X said that S is the case, we can be justified in believing that S is the case (see Fallis 2018, 57). And videos can provide almost as good evidence as a face-to-face conversation that X actually said that S is the case. For example, watching Lester Holt on the evening news gives me good evidence that he actually said certain things. Thus, if I already know that Lester Holt is a reliable testifier, I am justified in believing that these things are true.

As Floridi suggests, deepfakes seem to be interfering with our ability to acquire knowledge about the world by watching videos. But exactly how do deepfakes harm us epistemically? The main epistemic threat is that deepfakes can easily lead people to acquire false beliefs.