<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>misinformation Archives - SmartFrame</title>
	<atom:link href="https://smartframe.io/blog/tag/misinformation/feed/" rel="self" type="application/rss+xml" />
	<link>https://smartframe.io/blog/tag/misinformation/</link>
	<description>Ideal Presentation, Robust Protection and Easy Monetization</description>
	<lastBuildDate>Wed, 16 Jul 2025 10:51:47 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>

<image>
	<url>https://smartframe.io/wp-content/uploads/2023/09/fav-48x48-1.png</url>
	<title>misinformation Archives - SmartFrame</title>
	<link>https://smartframe.io/blog/tag/misinformation/</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Image manipulation: Why it’s a problem and what we can do about it</title>
		<link>https://smartframe.io/blog/image-manipulation-why-its-a-problem-and-what-we-can-do-about-it/</link>
		
		<dc:creator><![CDATA[Liam Machin]]></dc:creator>
		<pubDate>Tue, 30 Apr 2024 10:45:09 +0000</pubDate>
				<category><![CDATA[Image security]]></category>
		<category><![CDATA[News & Features]]></category>
		<category><![CDATA[content credentials]]></category>
		<category><![CDATA[disinformation]]></category>
		<category><![CDATA[image security]]></category>
		<category><![CDATA[misinformation]]></category>
		<guid isPermaLink="false">https://smartframe.io/?p=117913</guid>

					<description><![CDATA[<p>Tools used to manipulate images are more readily available than ever, and [&#8230;]</p>
<p>The post <a href="https://smartframe.io/blog/image-manipulation-why-its-a-problem-and-what-we-can-do-about-it/">Image manipulation: Why it’s a problem and what we can do about it</a> appeared first on <a href="https://smartframe.io">SmartFrame</a>.</p>
]]></description>
										<content:encoded><![CDATA[		<div data-elementor-type="wp-post" data-elementor-id="117913" class="elementor elementor-117913" data-elementor-post-type="post">
				<div class="elementor-element elementor-element-7df40add e-flex e-con-boxed e-con e-parent" data-id="7df40add" data-element_type="container" data-e-type="container" data-settings="{&quot;ekit_has_onepagescroll_dot&quot;:&quot;yes&quot;}">
					<div class="e-con-inner">
				<div class="elementor-element elementor-element-6b3c1315 elementor-widget elementor-widget-text-editor" data-id="6b3c1315" data-element_type="widget" data-e-type="widget" data-settings="{&quot;ekit_we_effect_on&quot;:&quot;none&quot;}" data-widget_type="text-editor.default">
									<p class="blog-stand-first">Tools used to manipulate images are more readily available than ever, and that can be an issue in terms of what makes something trustworthy. So what can be done to rebuild this?</p>
<p>Can you guess <a href="https://www.bbc.co.uk/news/uk-41838386" target="_blank" rel="noopener">2017’s word of the year</a>?</p>
<p>The answer to that burning question is &#8220;fake news&#8221; – thanks in large part, of course, to the US president at that time.</p>
<p>But the fact that we&#8217;re still debating what can be done about disinformation and misinformation years later shows just how significant this issue has become.</p>
<p>When it comes to social media platforms, they rely on people being able to share content freely and easily.</p>
<p>Consequently, such platforms must also deal with the fallout from content created and used for harm.</p>
<p>Although most of the largest platforms are now making greater efforts to educate users on the subject, a lack of protection and limited visibility over the origin of media mean these issues will persist.</p>
<p>If these platforms can rectify this and demonstrate that they take the safety of their users seriously, they can rebuild the trust that has been eroded over the past few years.</p>
<p>With the online world facing the challenge of <a href="https://reutersinstitute.politics.ox.ac.uk/our-research/speed-vs-accuracy-time-crisis" target="_blank" rel="noopener">speed vs accuracy</a>, it&#8217;s perhaps unsurprising that the WEF’s <a href="https://www.weforum.org/publications/global-risks-report-2024/" target="_blank" rel="noopener">2024 Global Risks Report</a> found &#8220;misinformation and disinformation to be the top risk for the world in the next two years.&#8221;</p>
<p>At the heart of this epidemic is image manipulation.</p>
<p>Out-of-context images have repeatedly caused some sense of <a href="https://www.theguardian.com/commentisfree/2020/feb/16/images-death-distress-photograph-publish-social-media" target="_blank" rel="noopener">clouded judgment</a> and made people more susceptible to believing any false narrative that might be attached to an image shared online.</p>
<p>But images that have been heavily altered, or that are entirely fictitious to begin with, pose even greater issues.</p>
<h4>What is image manipulation?</h4>
<p>Image manipulation refers to the act of adjusting a digital picture in some way.</p>
<p>Often, this is done to help create a certain creative look or to fulfill a business objective. It can, for example, be used to fine-tune details, make corrections, or even create entirely new compositions.</p>
<p><script async src="https://static.smartframe.io/embed.js"></script><smartframe-embed customer-id="7d0b78d6f830c45ae5fcb6734143ff0d" image-id="adobestock_190487957_1712915833761" theme="blog-new" style="width: 100%; display: inline-flex; aspect-ratio: 6720/4480; max-width: 6720px;"></smartframe-embed></p>
<h4>The history of image manipulation</h4>
<p>The use of manipulated images has a longer history than you might think. While it&#8217;s reasonable to view it as a modern issue, the use of image editing to deceive the public dates back to the 19th century.</p>
<p>It&#8217;s claimed that the first case of image manipulation took place in the early 1860s – and that this particular instance shaped the future of money.</p>
<p>Abraham Lincoln&#8217;s face was edited onto the body of another politician, John Calhoun, to &#8220;<a href="https://www.atlasobscura.com/articles/abraham-lincoln-photos-edited" target="_blank" rel="noopener">distract from his &#8216;gangly&#8217; frame</a>.&#8221; As for the connection with money, this manipulated image was believed to be the basis of Lincoln&#8217;s original five-dollar bill.</p>
<p>The widespread use of image manipulation became particularly noticeable during the early days of <a href="https://fstoppers.com/post-production/pics-manipulated-photos-notable-historic-figures-digital-era-and-after-images-6747" target="_blank" rel="noopener">fascism</a>.</p>
<p>In Nazi Germany, for example, images were frequently edited to change their meaning, often to demonize minorities.</p>
<p>This can also be seen in a <a href="https://fstoppers.com/post-production/pics-manipulated-photos-notable-historic-figures-digital-era-and-after-images-6747" target="_blank" rel="noopener">famous photo of Italian dictator Benito Mussolini</a>, which was edited to remove the horse handler to create a sense of &#8220;heroism&#8221;.</p>
<p>During conflicts, photographs were used to lift spirits, vilify opponents, and misrepresent events, evoking and exploiting the public&#8217;s emotions amid the turmoil of war.</p>
<div class="youtube-container"><iframe title="YouTube video player" src="https://www.youtube.com/embed/pT42iph_sRY?si=RaVdy6L8HvrLoiB0" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen"></iframe></div>
<p>These examples only scratch the surface of image manipulation&#8217;s complex history and impact on society.</p>
<p>With the constant availability of online content, one might assume that people are careful not to accept everything at face value.</p>
<p>Sadly, this isn&#8217;t the case.</p>
<h4>Why is image manipulation becoming more of a problem?</h4>
<p>Easy accessibility to image editing software, together with the growth of AI tools, means this issue stands to disrupt society in a way we haven&#8217;t seen before.</p>
<p>According to image search engine Everypixel, the growth of AI-generated images has led to more images being created in a single year than humans have produced in over a century, with over <a href="https://journal.everypixel.com/ai-image-statistics" target="_blank" rel="noopener">15 billion images</a> using text-to-image algorithms already generated.</p>
<p>People have also become somewhat desensitized to image manipulation because it&#8217;s usually used in ways that the average person would deem acceptable, such as improving profile pictures or Instagram posts.</p>
<p>But in recent times, fake and manipulated images have made headlines for the wrong reasons, such as the Princess of Wales’s <a href="https://time.com/6899993/princess-kate-middleton-photo-forensics-digital-provenance-credentials/" target="_blank" rel="noopener">Mother&#8217;s Day post</a> and Taylor Swift’s <a href="https://www.theverge.com/2024/1/25/24050334/x-twitter-taylor-swift-ai-fake-images-trending" target="_blank" rel="noopener">AI-generated explicit images</a>.</p>
<p>Deepfakes are also being used to create <a href="https://hsfnotes.com/tmt/2024/02/28/deepfakes-in-advertising-whos-behind-the-camera/" target="_blank" rel="noopener">misleading celebrity endorsements</a>, causing scam and fraud headaches for both internet platforms and their users.</p>
<p><script async src="https://static.smartframe.io/embed.js"></script><smartframe-embed customer-id="7d0b78d6f830c45ae5fcb6734143ff0d" image-id="adobestock_316724535_1711457481530" theme="blog-new" style="width: 100%; display: inline-flex; aspect-ratio: 4608/3456; max-width: 4608px;"></smartframe-embed></p>
<h4>What can be done to stop image manipulation from spreading fake news?</h4>
<p>It&#8217;s, of course, impossible to stop image manipulation completely. But when it comes to viewing these images online, there are a number of options available to help people identify manipulation and fakery.</p>
<p>Industry standards, comprising standardized verification, clear editing guidelines, and ethical codes, can help fight against the proliferation of fake images.</p>
<p>Some governments have even taken it into their own hands and implemented <a href="https://www.theguardian.com/world/2020/jan/28/fact-from-fiction-finlands-new-lessons-in-combating-fake-news" target="_blank" rel="noopener">education in schools to help people spot fake media</a>, despite the constantly changing nature of the issue making this more difficult.</p>
<p>Social media platforms play an important role too.</p>
<p>To address the proliferation of manipulated images, these platforms could work more closely with fact-checking organizations to verify the legitimacy of shared content.</p>
<p>Digital signatures, watermarking, and other image analysis tools could also be integrated directly into social media platforms to help flag potentially misleading content.</p>
<p>The ease with which images can be stolen is arguably the most important enabler of many of the risks associated with manipulated media.</p>
<p>Part of that problem is that the longstanding JPEG file has been the default format for images since the internet&#8217;s inception.</p>
<p>Yet, it offers no meaningful protection against theft – simply right-click and save, and from there, anyone with some image-editing know-how can manipulate images any way they wish.</p>
<h4>How do Content Credentials intend to influence the future of imagery and image manipulation?</h4>
<p><a href="https://smartframe.io/blog/content-credentials-everything-you-need-to-know/" rel="noopener">Content Credentials</a> give users context on the content they’re met with. This in turn allows them to make better decisions on whether or not the image can be trusted as a source of information.</p>
<p>The Adobe-led initiative allows for proper attribution to all images posted online, ensuring that the original image and its producer are visible, creating <a href="https://smartframe.io/blog/is-a-new-age-of-transparency-on-the-horizon/" target="_blank" rel="noopener">more transparency</a>.</p>
<p>Furthermore, it&#8217;s possible to see how the image was manipulated, including editing history and any use of AI tools.</p>
<div class="youtube-container"><iframe title="YouTube video player" src="https://www.youtube.com/embed/SAJVm9Uq7RE?si=SPdMZQaNTjKZVO4A" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen"></iframe></div>
<p>Additionally, images embedded into news articles using SmartFrame can have all of the above while being protected against both drag-and-drop attempts and right-clicks, with a copyright warning thwarting screenshot attempts too.</p>
<p>And, as every SmartFrame is encrypted and only appears when a user is actively browsing a website, the image disappears as soon as the viewer closes the browser tab or window.</p>
<h4>Image manipulation isn’t new – but the need to highlight when it&#8217;s used is becoming non-negotiable</h4>
<p>The targeting of celebrities and key world events, and the content used for this, clearly show the need for increased regulation and protection.</p>
<p>Through clear labeling, increased education, and added accountability with repercussions, we can help mitigate the spread of misinformation and rebuild trust in the images we see online.</p>
<p>As James Warren, NewsGuard&#8217;s executive editor, <a href="https://www.newsguardrealitycheck.com/p/reality-check-commentary-a-faked?utm_campaign=email-half-post&#038;r=3a021g&#038;utm_source=substack&#038;utm_medium=email" target="_blank" rel="noopener">noted in a recent Substack post</a>: &#8220;Kim Kardashian’s enhanced glam shot on a magazine cover is one thing, fiddling in the slightest with a photo of the Israel-Hamas war is another.&#8221;</p>								</div>
					</div>
				</div>
				</div>
		<p>The post <a href="https://smartframe.io/blog/image-manipulation-why-its-a-problem-and-what-we-can-do-about-it/">Image manipulation: Why it’s a problem and what we can do about it</a> appeared first on <a href="https://smartframe.io">SmartFrame</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>How can publishers drive revenue growth in 2024?</title>
		<link>https://smartframe.io/blog/how-news-publishers-can-drive-revenue/</link>
		
		<dc:creator><![CDATA[Liam Machin]]></dc:creator>
		<pubDate>Mon, 11 Mar 2024 09:15:36 +0000</pubDate>
				<category><![CDATA[News & Features]]></category>
		<category><![CDATA[ai]]></category>
		<category><![CDATA[image licensing]]></category>
		<category><![CDATA[misinformation]]></category>
		<category><![CDATA[news]]></category>
		<category><![CDATA[publishers]]></category>
		<category><![CDATA[revenue]]></category>
		<guid isPermaLink="false">https://smartframe.io/?p=83079</guid>

					<description><![CDATA[<p>Publishers face a constant battle of staying profitable while maintaining journalistic integrity, [&#8230;]</p>
<p>The post <a href="https://smartframe.io/blog/how-news-publishers-can-drive-revenue/">How can publishers drive revenue growth in 2024?</a> appeared first on <a href="https://smartframe.io">SmartFrame</a>.</p>
]]></description>
										<content:encoded><![CDATA[		<div data-elementor-type="wp-post" data-elementor-id="83079" class="elementor elementor-83079" data-elementor-post-type="post">
				<div class="elementor-element elementor-element-53069628 e-flex e-con-boxed e-con e-parent" data-id="53069628" data-element_type="container" data-e-type="container" data-settings="{&quot;ekit_has_onepagescroll_dot&quot;:&quot;yes&quot;}">
					<div class="e-con-inner">
				<div class="elementor-element elementor-element-69fb9c10 elementor-widget elementor-widget-text-editor" data-id="69fb9c10" data-element_type="widget" data-e-type="widget" data-settings="{&quot;ekit_we_effect_on&quot;:&quot;none&quot;}" data-widget_type="text-editor.default">
									<p class="blog-stand-first">Publishers face a constant battle of staying profitable while maintaining journalistic integrity, so the need to develop several revenue streams is critical. We break down some of the key challenges and potential solutions.</p>
<p>As more independent publishers emerge each year, the fight for attention is becoming an increasingly important battle for established ones. Losing readers, after all, usually means losing revenue as well.</p>
<p>According to The Guardian, over the last decade, <a href="https://www.theguardian.com/media/2023/mar/26/regional-newspapers-fight-for-survival-in-a-digital-world" target="_blank" rel="noopener">approximately 300 local newspapers in the UK have closed</a>, with UK print publication ad revenue <a href="https://pressgazette.co.uk/marketing/global-print-advertising-market-halves-in-six-years-but-publishers-struggling-to-compete-with-online-oligopoly/" target="_blank" rel="noopener">halving over a six-year period</a>.</p>
<p>The picture is much the same in the US; <a href="https://apnews.com/article/local-newspapers-closing-jobs-3ad83659a6ee070ae3f39144dd840c1b" target="_blank" rel="noopener">one-third of local news outlets have closed since 2005,</a> mainly thanks to a lack of demand from consumers due to the emergence of other media sources.</p>
<p>Amid the ever-present volatility brought by the <a href="https://smartframe.io/blog/newsguard-brands-wasting-money-programmatic-advertising-on-ai-generated/">proliferation of misinformation</a> and the use of AI technology, the need for adaptability and innovation becomes critical for publishers.</p>
<p>However, the end of <a href="https://smartframe.io/blog/google-and-third-party-cookies-in-2024/">third-party cookie support in Google Chrome</a> presents a new opportunity for both news publishers and media companies to take stock of their revenue streams and better understand the advertisements they’re putting in front of their audiences, optimizing accordingly.</p>
<h4>What are the biggest challenges facing news publishers today?</h4>
<p>In today&#8217;s world, trust is everything. Publishers are working hard to make sure their reports are accurate and to fight false information.</p>
<p>In fact, a <a href="https://www3.weforum.org/docs/WEF_The_Global_Risks_Report_2024.pdf" target="_blank" rel="noopener">World Economic Forum report</a> labeled misinformation and disinformation generated by artificial intelligence as the single biggest threat to the world over the next two years, one that poses a greater risk than even extreme weather events and armed conflict.</p>
<div class="youtube-container"><iframe title="YouTube video player" src="https://www.youtube.com/embed/B4jNttRvbpU?si=0O9LUY6iBtBDh6oz" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen"></iframe></div>
<p>A significant story from this year saw a series of <a href="https://www.nbcnews.com/tech/misinformation/taylor-swift-nude-deepfake-goes-viral-x-platform-rules-rcna135669" target="_blank" rel="noopener">deepfake images of Taylor Swift</a> go viral, amassing over 27 million views in 19 hours before they were taken down.</p>
<p>Not only does this type of manipulation harm the person in question, but if the fake images were incorrectly reported to be genuine by a prominent news outlet, reputational and financial harm could easily follow.</p>
<p>Fortunately, this particular issue was resolved swiftly. Nevertheless, it highlights the importance of having clear and robust fact-checking procedures in place to foster a more transparent ecosystem, one that prioritizes truth, integrity, and audience trust.</p>
<p>While cost-cutting measures and organizational changes are becoming more frequent for many online publishers, they should not compromise the quality of content.</p>
<p>Exploring ways to train and upskill existing employees, or to leverage new technologies, can be an easy way to minimize risk without ramping up costs.</p>
<h4>The decline of print media</h4>
<p>As we have seen, print advertising – once the cornerstone of revenue for many publications – has experienced a strong decline in recent years.</p>
<p>Online media, and the shift of younger audiences away from physical to digital media sources, have taken away much of its demand.</p>
<p>A telling sign can be seen in the total number of actively purchased print publications in the UK, which have fallen by <a href="https://www.theguardian.com/media/2023/jul/03/tipping-point-in-decline-of-magazines-as-one-large-printer-remains-in-uk" target="_blank" rel="noopener">70% between 2010 and 2022</a>.</p>
<p>The same can be seen in the US where, according to <a href="http://pewresearch.org/journalism/fact-sheet/newspapers/" target="_blank" rel="noopener">Pew Research</a>, daily newspaper circulation (both print and digital) was 20.9 million, down around 8-10% compared to 2021.</p>
<p>This lack of demand for print news has forced more publications to emphasize online media and the advertising revenue that comes from that, as print faces a dire need to innovate.</p>
<p><script async src="https://static.smartframe.io/embed.js"></script><smartframe-embed customer-id="7d0b78d6f830c45ae5fcb6734143ff0d" image-id="adobestock_278614323_1708683881671" theme="blog-new" style="width: 100%; display: inline-flex; aspect-ratio: 6000/4000; max-width: 6000px;"></smartframe-embed></p>
<h4>Online advertising: A necessity for survival</h4>
<p>In online publishing, display advertising has become increasingly dominant in the reading experience – <a href="https://www.reddit.com/r/britishproblems/comments/13i6lwi/when_british_newspaper_websites_are_so_full_of/" target="_blank" rel="noopener">often leading to complaints</a>.</p>
<p>On top of that, the digital space is undeniably saturated with competition, which makes it challenging for publishers to secure a significant share of online ad revenue as their space is valued against what the advertisers are willing to pay.</p>
<p>A <a href="https://digiday.com/form/the-state-of-publisher-ad-revenue-framing-the-changing-roles-of-the-open-marketplace-direct-sold-and-paths-to-profits-in-2023/" target="_blank" rel="noopener">Digiday and Permutive research report</a> found that 60% of publishers said advertising would account for &#8220;one-fifth or less&#8221; of their annual revenue in 2023.</p>
<p>Programmatic advertising, although efficient in terms of speed and ad placement, often yields relatively low returns when compared to alternative agreements and carries the risk of displaying <a href="https://theconversation.com/why-bad-ads-appear-on-good-websites-a-computer-scientist-explains-178268" target="_blank" rel="noopener">irrelevant or even harmful content</a>.</p>
<p>Despite the possible reach of digital advertising, publishers often struggle to maximize returns, especially with the dominance of programmatic advertising.</p>
<p>The report found that open marketplace deals produced less than 21% of their annual ad revenue, showing the demand for more direct relationships and agreements with brands to generate the best deals for everyone involved – something we&#8217;ll go into in more detail later.</p>
<h4>What can publishers do to improve engagement and increase revenue?</h4>
<p>There are several different ways in which publishers can increase engagement with their readers and simultaneously boost revenue – which is most effective is up for debate.</p>
<p>The continued evolution of privacy measures, such as the dropping of support for third-party cookies on Chrome, highlights the value of strategic partnerships.</p>
<p>Publishers can increase their revenue and attract more readers by partnering with relevant tech companies, brands, or agencies that can help them promote products and services that their audience already enjoys.</p>
<p>There are numerous success stories, in particular in Norway, whose news outlets are judged to be <a href="https://www.thelocal.no/20170925/why-norwegian-media-lead-the-world-in-digital-subscriptions" target="_blank" rel="noopener">leaders in digital media subscriptions</a>. But these partnerships can go beyond just brand deals and include other news sources.</p>
<p>The New York Times, for example, has developed a <a href="https://pressgazette.co.uk/paywalls/new-york-times-bundle-revenue-growth-strategy/" target="_blank" rel="noopener">unique subscription bundle</a> that combines other areas of its business to offer potential customers a more rounded package.</p>
<p>It includes news, games, recipes, audio &#038; podcasts, reviews via Wirecutter, and sports coverage via The Athletic.</p>
<p>This bundle has helped the organization achieve an annual digital subscription revenue of more than $1bn for the <a href="https://www.nytimes.com/2024/02/07/business/media/new-york-times-q4-earnings.html#:~:text=At%20the%20end%20of%20the,million%20of%20them%20digital%2Donly.&#038;text=The%20New%20York%20Times%20Company,billion%20for%20the%20first%20time." target="_blank" rel="noopener">first time in its history</a>, with 9.7 million of its subscribers being digital-only.</p>
<p>These models aren&#8217;t new, but by creating more holistic packages, publishers offer additional value to customers and gain ample opportunity to promote affiliated businesses.</p>
<h4>Understanding audience preferences</h4>
<p>Already, most publishers have embraced a multi-platform content strategy to engage their audience segments effectively.</p>
<p>Yet, this type of audience segmentation can also dictate decisions about what brands and advertisements should feature onsite.</p>
<p>Research from the Digiday and Permutive paper found that 65% of those interviewed said data and analytics have the greatest impact on driving positive ad revenue outcomes.</p>
<p>As well as all this, direct ad sales, curated marketplaces, and sponsored content partnerships all provide more tailored opportunities for publishers to have greater control over ad placement suited for their audience.</p>
<p>Adopting other forms of advertising, such as <a href="https://smartframe.io/blog/in-image-advertising-how-it-works-and-faq/">in-image advertising</a> and augmented reality ads, also presents new options to promote any advertisements effectively.</p>
<p>News outlets can&#8217;t control what brands make, but stronger relationships with brands and agencies promote more free sharing of ideas to achieve the best results for both parties.</p>
<h4>How does SmartFrame help news publishers generate more revenue?</h4>
<p>At SmartFrame, we are committed to creating the most sustainable ecosystem for all those involved with photography and publishing.</p>
<p>We&#8217;ve designed an ecosystem where all parties receive due recognition for their contributions.</p>
<p>Our publisher partners enjoy unrestricted access to a vast library of exclusive and historical imagery through an ad-funded model.</p>
<p>Here&#8217;s an example of such historic images that are only accessible in this format through our <a href="https://images.nzrugby.co.nz/" target="_blank" rel="noopener">New Zealand Rugby library</a>.</p>
<p>This grants them license-free access to streamed image embeds for online usage, together with JPEG access for use in print and where an embed cannot be used.</p>
<p>Moreover, publishers maintain full control over the ad campaigns, with each campaign undergoing approval before it&#8217;s put into place. This ensures that publishers retain control over the content featured on their platforms.</p>
<p>By <a href="https://smartframe.io/publishers/" target="_blank" rel="noopener">embedding our images</a> into their content, publishers benefit from the advertising that’s layered over the image in a non-intrusive manner.</p>
<p>Think of it as a new advertising billboard seamlessly integrated into an article that enriches the content with exclusive imagery and provides contextually relevant ads without disrupting the reader&#8217;s attention or <a href="https://smartframe.io/blog/online-publishers-increase-optimize-page-speed-without-plugins/" target="_blank" rel="noopener">load times</a>.</p>								</div>
					</div>
				</div>
				</div>
		<p>The post <a href="https://smartframe.io/blog/how-news-publishers-can-drive-revenue/">How can publishers drive revenue growth in 2024?</a> appeared first on <a href="https://smartframe.io">SmartFrame</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Is a new age of transparency on the horizon?</title>
		<link>https://smartframe.io/blog/is-a-new-age-of-transparency-on-the-horizon/</link>
		
		<dc:creator><![CDATA[Matt Golowczynski]]></dc:creator>
		<pubDate>Tue, 16 Jan 2024 11:54:37 +0000</pubDate>
				<category><![CDATA[News & Features]]></category>
		<category><![CDATA[advertising]]></category>
		<category><![CDATA[content credentials]]></category>
		<category><![CDATA[disinformation]]></category>
		<category><![CDATA[misinformation]]></category>
		<category><![CDATA[transparency]]></category>
		<category><![CDATA[trust]]></category>
		<guid isPermaLink="false">https://smartframe.io/?p=82745</guid>

					<description><![CDATA[<p>Growing uncertainty around what we can trust online has given rise to [&#8230;]</p>
<p>The post <a href="https://smartframe.io/blog/is-a-new-age-of-transparency-on-the-horizon/">Is a new age of transparency on the horizon?</a> appeared first on <a href="https://smartframe.io">SmartFrame</a>.</p>
]]></description>
										<content:encoded><![CDATA[		<div data-elementor-type="wp-post" data-elementor-id="82745" class="elementor elementor-82745" data-elementor-post-type="post">
				<div class="elementor-element elementor-element-70952fa8 e-flex e-con-boxed e-con e-parent" data-id="70952fa8" data-element_type="container" data-e-type="container" data-settings="{&quot;ekit_has_onepagescroll_dot&quot;:&quot;yes&quot;}">
					<div class="e-con-inner">
				<div class="elementor-element elementor-element-707047ec elementor-widget elementor-widget-text-editor" data-id="707047ec" data-element_type="widget" data-e-type="widget" data-settings="{&quot;ekit_we_effect_on&quot;:&quot;none&quot;}" data-widget_type="text-editor.default">
									<p class="blog-stand-first">Growing uncertainty around what we can trust online has given rise to a number of initiatives whose goal is to improve transparency. But is this indicative of a broader trend?</p>
<p>Late last year, online dictionary Merriam-Webster announced that the word “authentic” had the honor of being its <a href="https://www.merriam-webster.com/wordplay/word-of-the-year" target="_blank" rel="noopener">2023 Word Of The Year</a>. </p>
<p>To many, this might not have come as a surprise, given the events of the last twelve months. But with runners-up such as &#8220;deepfake&#8221; and &#8220;dystopian,&#8221; and these results reflecting search volumes on the site over the past year, there is an obvious temptation to draw conclusions as to the public’s mood and focus. </p>
<p>Nevertheless, as the issues that led to these searches persist beyond the year&#8217;s end, it seems likely that tools and technologies that serve as a mark of authenticity will continue to receive attention.</p>
<h4>Building trust</h4>
<p>One reason for this is that the matter is just as crucial to publishers of online content as it is to the audiences that consume it. For the latter, understanding what’s authentic is important as people want to be assured that they are receiving accurate information about the world. But for the former, it’s their entire reputation that’s at stake. </p>
<p>The demands of rolling news coverage, together with a reliance on citizen journalism and a constant stream of newer publishers challenging so-called legacy media, leave ample room for mistakes. While the consequences of these may often be insignificant, they can easily invite accusations of bias, particularly in the reporting of global conflicts and health-related matters. It’s not just a question of rigorous fact-checking, but being able to do so at speed. Get both right and the prize is the public’s trust.</p>
<p><script async src="https://static.smartframe.io/embed.js"></script><smartframe-embed customer-id="7d0b78d6f830c45ae5fcb6734143ff0d" image-id="adobestock_454209510_1704897589140" theme="blog-new" style="width: 100%; display: inline-flex; aspect-ratio: 4760/3200; max-width: 4760px;"></smartframe-embed></p>
<p>Publishers have always been conscious of this, but with respect to visual content, and the ease with which it can be manipulated and presented out of context, there’s an imperative to demonstrate this kind of trustworthiness in a more tangible way.</p>
<p>Any organization can claim to be trustworthy, but it takes little more than anonymous replies on social media posts with links to alternative sources of information to undermine this. So the logical response is to make the process of assessing this information as transparent as possible.</p>
<p><strong>Read more:</strong> <a href="https://smartframe.io/blog/online-publishers-increase-optimize-page-speed-without-plugins/" target="_blank" rel="noopener">How can publishers increase and optimize page speed?</a></p>
<p>Last year, for example, <a href="https://www.bbc.co.uk/news/uk-65650822" target="_blank" rel="noopener">the BBC launched a new BBC Verify brand</a>, which comprises around 60 journalists working in a dedicated space &#8220;with a range of forensic investigative skills and open source intelligence (Osint) capabilities at their fingertips&#8221;. These include Analysis Editor Ros Atkins, who is best known for his <a href="https://www.bbc.co.uk/iplayer/episodes/p095rjk1/ros-atkins-on" target="_blank" rel="noopener">Ros Atkins On…</a> series that distills complex issues into short video explainers.</p>
<p>The BBC states that, rather than sorting content into verified and unverified buckets, BBC Verify aims to actually explain the verification that has taken place. This allows the viewer to assess whether its methods are sound, rather than simply take its word that a piece of content is considered genuine.</p>
<p>Similarly, the <a href="https://smartframe.io/blog/content-credentials-everything-you-need-to-know/" target="_blank" rel="noopener">Content Credentials</a> tool from the Content Authenticity Initiative allows for greater transparency over the creation and editing of online content. By allowing viewers to inspect the credentials attached to the work – that is, the content producer, any edits that have been made, the original images used in any composite works, and the date and signing of the work – they’re better informed about what to trust.</p>
<p><script async src="https://static.smartframe.io/embed.js"></script><smartframe-embed customer-id="7d0b78d6f830c45ae5fcb6734143ff0d" image-id="adobestock_186951735_1704898445312" theme="blog-new" style="width: 100%; display: inline-flex; aspect-ratio: 4500/2592; max-width: 4500px;"></smartframe-embed></p>
<p>These initiatives follow others developed over the past few years for advertisers, whose concerns around transparency are more to do with having sufficient oversight on the supply chain so that they can understand where their ads have been shown, the allocation of their budgets, and tools to counter ad fraud.</p>
<p>While these initiatives, which include the IAB’s <a href="https://www.iabuk.com/goldstandard" target="_blank" rel="noopener">Gold Standard</a> and <a href="https://smartframe.io/blog/ads-txt-what-it-is-and-why-you-need-it/" target="_blank" rel="noopener">ads.txt</a>, may appear distinct from things like BBC Verify and Content Credentials, they’re not wholly unrelated. Publishers committed to creating trustworthy environments are more likely to attract the right kind of audiences, which, in turn, are more valuable to advertisers.</p>
<h4>A new era?</h4>
<p>Perhaps it’s the fact that the emergence of these tools and systems coincides with Google’s long-awaited deprecation of third-party cookies in Chrome – which places more focus on exactly how online viewers are being targeted by ads and the data used to do so – that makes it seem like the following year will witness the start of a new chapter in online transparency.</p>
<p><script async src="https://static.smartframe.io/embed.js"></script><smartframe-embed customer-id="7d0b78d6f830c45ae5fcb6734143ff0d" image-id="google_1704898856586" theme="blog-new" style="width: 100%; display: inline-flex; aspect-ratio: 953/635; max-width: 953px;"></smartframe-embed></p>
<p>Or perhaps these are a natural and logical continuation of what has already come before. Many publishers already state their editorial policies and highlight corrections where necessary; detail their ownership and funding; mark native advertising clearly; show authorship and links to writers’ social media channels; and adhere to requirements around affiliate commissions when recommending products that can be purchased on a third-party site. </p>
<p><strong>Read more:</strong> <a href="https://smartframe.io/blog/premium-publisher-platforms-what-are-they-why-do-they-matter/" target="_blank" rel="noopener">Premium publishers – what they are and the difference they make</a></p>
<p>Over the next few years, these practices are likely to be joined by policies that detail the responsible use of AI tools. Additionally, clearer labeling of AI-generated content may become more prominent.</p>
<p>While some of these may be required to comply with the relevant regulations, publishers that voluntarily take additional steps to demonstrate their trustworthiness are more likely to hold themselves to a higher standard for their audiences and partners, and so will fare better under scrutiny. Given that the average person will only realistically take their news from a limited number of sources, steps like these may well determine who the publishers of tomorrow actually are.</p>								</div>
					</div>
				</div>
				</div>
		<p>The post <a href="https://smartframe.io/blog/is-a-new-age-of-transparency-on-the-horizon/">Is a new age of transparency on the horizon?</a> appeared first on <a href="https://smartframe.io">SmartFrame</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Tech predictions 2024: What we expect</title>
		<link>https://smartframe.io/blog/tech-predictions-2024-what-should-we-expect/</link>
		
		<dc:creator><![CDATA[Liam Machin]]></dc:creator>
		<pubDate>Tue, 09 Jan 2024 09:26:47 +0000</pubDate>
				<category><![CDATA[News & Features]]></category>
		<category><![CDATA[ai]]></category>
		<category><![CDATA[Google]]></category>
		<category><![CDATA[misinformation]]></category>
		<category><![CDATA[regulations]]></category>
		<category><![CDATA[search]]></category>
		<guid isPermaLink="false">https://smartframe.io/?p=82757</guid>

					<description><![CDATA[<p>The next 12 months should usher in several significant changes across the [&#8230;]</p>
<p>The post <a href="https://smartframe.io/blog/tech-predictions-2024-what-should-we-expect/">Tech predictions 2024: What we expect</a> appeared first on <a href="https://smartframe.io">SmartFrame</a>.</p>
]]></description>
										<content:encoded><![CDATA[		<div data-elementor-type="wp-post" data-elementor-id="82757" class="elementor elementor-82757" data-elementor-post-type="post">
				<div class="elementor-element elementor-element-69fc95a3 e-flex e-con-boxed e-con e-parent" data-id="69fc95a3" data-element_type="container" data-e-type="container" data-settings="{&quot;ekit_has_onepagescroll_dot&quot;:&quot;yes&quot;}">
					<div class="e-con-inner">
				<div class="elementor-element elementor-element-1bbfe63d elementor-widget elementor-widget-text-editor" data-id="1bbfe63d" data-element_type="widget" data-e-type="widget" data-settings="{&quot;ekit_we_effect_on&quot;:&quot;none&quot;}" data-widget_type="text-editor.default">
									<p class="blog-stand-first">The next 12 months should usher in several significant changes across the tech world. So what should we expect?</p>
<p>As we move into 2024, the tech world seems to be at something of a crossroads.</p>
<p>With an expected <a href="https://www.voanews.com/a/global-election-year-ahead-lays-bare-strife-between-east-and-west/7431544.html" target="_blank" rel="noopener">60+ national elections on the horizon</a>, including major contests in some of the world&#8217;s most powerful nations, the tech industry will play a crucial role in shaping responsible public discourse.</p>
<p>From <a href="https://developers.google.com/privacy-sandbox/overview" target="_blank" rel="noopener">Google&#8217;s Privacy Sandbox</a> to the still-much-anticipated explosion of metaverse platforms and experiences, there&#8217;s plenty that could end up shaping 2024.</p>
<p>Furthermore, McKinsey estimates that <a href="https://www.mckinsey.com/featured-insights/artificial-intelligence/notes-from-the-ai-frontier-modeling-the-impact-of-ai-on-the-world-economy" target="_blank" rel="noopener">AI will contribute $13 trillion to the global economy by 2030</a> as it spreads across different industries.</p>
<p>Here are just a few areas that we think you should keep tabs on this year.</p>
<h4>Regulatory crackdown on Big Tech</h4>
<p>Regulators around the world are battling to regain control over industry giants Alphabet (Google), Amazon, Meta, Microsoft, and Apple.</p>
<p>There is a growing belief that increased regulation of the biggest tech players will create a more competitive, innovative, and ethical environment, one that prioritizes individual rights and consumer protection.</p>
<p>As these debates continue, it&#8217;s important to strike a balance between effective regulation and stifling innovation.</p>
<p>The ultimate goal <em>should</em> be to create a regulatory framework that promotes a fairer tech ecosystem that benefits consumers and businesses alike. However, this is easier said than done.</p>
<p>With several global elections and <a href="https://pro.morningconsult.com/analysis/public-opinion-antitrust-big-tech" target="_blank" rel="noopener">major lawsuits popping up around the world</a>, it may well be a challenging year for the most prominent names in the tech world.</p>
<h4>Artificial Intelligence (AI) deepening its integration into Western society</h4>
<p>The last few years have seen AI dramatically streamline workflows, and we&#8217;re now at the stage where the technology is starting to free up our time at work.</p>
<div class="youtube-container"><iframe loading="lazy" title="YouTube video player" src="https://www.youtube.com/embed/cgEVwDUfCho?si=84-S9RLtkDe7Mun4" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen"></iframe></div>
<p><a href="https://builtin.com/artificial-intelligence/ai-replacing-jobs-creating-jobs" target="_blank" rel="noopener">Although this incredible growth does come with some potential risks</a>, AI’s ability to remove arduous and boring tasks from our lists is surely something to look forward to, allowing people to focus on more creative and strategic endeavors.</p>
<p>ChatGPT is perhaps the most prominent example. Launched in November 2022, it reached 100 million users within just 60 days and is now said to be used by <a href="https://nerdynav.com/chatgpt-statistics/" target="_blank" rel="noopener">92% of Fortune 500 companies</a>.</p>
<p>In the creative world, the boundary between human and machine creativity continues to blur, from the advanced editing features of Google Pixel 8 – dubbed <a href="https://blog.google/products/photos/google-photos-magic-editor-pixel-io-2023/" target="_blank" rel="noopener">Magic Editor</a> – to professional-grade software that leverages AI to transform images and videos and other media.</p>
<p>But while AI-powered tools have put artistic power into everyone&#8217;s hands – regardless of skill level –  <a href="https://smartframe.io/blog/copyright-ownership-ai-generated-art/" target="_blank" rel="noopener">copyright issues</a> mean that regulations will continue to evolve to ensure the responsible use and application of these tools.</p>
<h4>Demand for true human authorship</h4>
<p>While many people accept AI&#8217;s potential to improve our lives, there&#8217;s an inevitable downside to contend with too.</p>
<p>2023 saw a dramatic increase in &#8220;dud&#8221; websites, also known as <a href="https://smartframe.io/blog/newsguard-brands-wasting-money-programmatic-advertising-on-ai-generated/">unreliable artificial intelligence-generated news and information websites</a> (UAINs), filled with generic text that offers no value or depth.</p>
<p>In response, a renewed appreciation for human connection and authenticity has emerged. One recent survey reflected this shift: 72% of respondents said they <a href="https://the-media-leader.com/ai-content-can-u-spot-it/#:~:text=Nearly%20three%2Dquarters%20(72%25),in%20human%20creativity%20and%20judgement." target="_blank" rel="noopener">preferred to read content written by a human</a>.</p>
<p>The recent growth of <a href="https://nogood.io/2023/09/18/micro-influencers/" target="_blank" rel="noopener">micro-influencers</a> and the power of word-of-mouth marketing reflect a genuine desire for real-world recommendations and relatable voices.</p>
<p>Will this cause a shift in demand for keywords and SEO? Opinions are divided, but platforms such as <a href="https://www.businessofapps.com/data/linkedin-statistics/" target="_blank" rel="noopener">LinkedIn have seen explosive growth</a> in recent years, fueled by a hunger for genuine expertise instead of algorithmic curation.</p>
<h4>Staying informed about misinformation</h4>
<p>Discussions around <a href="https://smartframe.io/blog/imaging-and-ai-the-fascinating-ways-in-which-the-biggest-brands-are-using-artificial-intelligence-today/" target="_blank" rel="noopener">image manipulation</a>, privacy, and AI bias within photography are poised to take center stage in 2024.</p>
<p>Last year saw multiple stories fabricated from fake content, including a <a href="https://www.aljazeera.com/news/2023/5/23/fake-pentagon-explosion-photo-goes-viral-how-to-spot-an-ai-image" target="_blank" rel="noopener">reported explosion at the Pentagon</a>. Such stories can easily – and quickly – lead to devastating consequences.</p>
<p>With AI-powered editing tools, such as Photoshop&#8217;s Generative Fill, widely available, the line between reality and fabrication continues to blur.</p>
<p>The most obvious answer to this lies in initiatives such as the Adobe-led <a href="https://smartframe.io/blog/content-credentials-everything-you-need-to-know/" target="_blank" rel="noopener">Content Credentials</a>. These initiatives continue to grow in stature, which should help improve overall transparency about how an image has been changed. And by increasing awareness of online manipulation and misinformation, greater demand for more ethical practices should follow.</p>
<h4>More metaverse momentum</h4>
<p>Excitement for virtual worlds and experiences continues to build in certain circles, but the core challenges holding back the metaverse remain in place. Will 2024 be the year it finally makes a real breakthrough?</p>
<p>Key hurdles, such as accessibility, user behavior, and privacy concerns, are undoubtedly the main reasons why widespread adoption has not yet properly taken place – much like the internet in its early years.</p>
<p>And yet, despite the arguments against it, <a href="https://www.rollingstone.com/culture/culture-features/mark-zuckerberg-meta-ai-metaverse-1234950139/" target="_blank" rel="noopener">and some already claiming Meta&#8217;s iteration has died a quiet death</a>, many still see huge commercial potential in the metaverse.</p>
<p>Bloomberg predicts its value could reach <a href="https://technologymagazine.com/articles/metaverse-may-reach-615bn-by-2030-bloomberg-report-says" target="_blank" rel="noopener">$615bn by 2030</a>, while McKinsey shoots higher, suggesting it could reach a lofty <a href="https://www.mckinsey.com/capabilities/growth-marketing-and-sales/our-insights/value-creation-in-the-metaverse">$5tn in the same time frame</a>.</p>
<p>With investments pouring in from the likes of Meta, Microsoft, and Epic Games, it does feel like we&#8217;re at a potential tipping point.</p>
<p>Even the World Economic Forum (WEF) is introducing &#8220;metaverse sessions&#8221; in a bid to democratize access to its events – especially for young adults and entrepreneurs.</p>
<div class="youtube-container"><iframe loading="lazy" title="YouTube video player" src="https://www.youtube.com/embed/bu0CO8qT53E?si=7UMotxJSofNW3gbG" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen"></iframe></div>
<h4>Changes in online search</h4>
<p>Following on from the metaverse, many people have already pointed out that the rise of <a href="https://www.statista.com/statistics/1341355/sources-of-info-for-product-research-among-gen-z/" target="_blank" rel="noopener">alternative sources of product information</a>, such as TikTok and the Internet of Things (IoT), could be a sign of the end of the default &#8220;I&#8217;ll Google it&#8221; mentality.</p>
<p>Already, almost <a href="https://techcrunch.com/2022/07/12/google-exec-suggests-instagram-and-tiktok-are-eating-into-googles-core-products-search-and-maps/" target="_blank" rel="noopener">40% of Gen Z uses TikTok and Instagram for search</a>, and with the ongoing <a href="https://www.theguardian.com/technology/2023/dec/29/google-lawsuit-settlement-incognito-mode" target="_blank" rel="noopener">antitrust scrutiny against Google</a>, this shift could lead to a more diverse search landscape.</p>
<p>However, one of the most notable developments in this space is <a href="https://blog.google/products/search/generative-ai-search/" target="_blank" rel="noopener">Google&#8217;s supercharged Search Generative Experience</a> (SGE). This AI-powered feature holds the potential to deliver answers directly on the search page, potentially bypassing traditional websites.</p>
<p>This may pose some challenges for publishers and content creators, but it also has the potential to democratize access to information, making it easier for users to find what they need.</p>
<h4>No cookies, no party</h4>
<p>Google&#8217;s Privacy Sandbox aims to strike a better balance between ad targeting and user privacy, replacing third-party cookies with new tools such as <a href="https://www.exchangewire.com/blog/2024/01/11/privacy-sandbox-how-is-2024-looking/" target="_blank" rel="noopener">CHIPS and Topics API</a>.</p>
<p>It&#8217;s been on the cards for a while, and organizations should see it as a necessary shift towards less intrusive measures such as contextual advertising and better first-party-data strategies.</p>
<p>The effectiveness of this change is still up for debate, a debate that will no doubt continue across the year amid ongoing concerns about Google&#8217;s potential power grab and <a href="https://www.wired.com/story/google-consent-decree-ftc-broken-privacy-protections/" target="_blank" rel="noopener">user data collection</a>.</p>
<p>However, this shift could also ignite an era of more creative advertising campaigns that are built on quality content and authentic engagement.</p>								</div>
					</div>
				</div>
				</div>
		<p>The post <a href="https://smartframe.io/blog/tech-predictions-2024-what-should-we-expect/">Tech predictions 2024: What we expect</a> appeared first on <a href="https://smartframe.io">SmartFrame</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Deepfake videos have us concerned, but are we overlooking a more sinister threat found within them?</title>
		<link>https://smartframe.io/blog/deepfake-videos-have-us-concerned-are-we-overlooking-another-threat/</link>
		
		<dc:creator><![CDATA[Matt Golowczynski]]></dc:creator>
		<pubDate>Fri, 30 Jul 2021 08:58:45 +0000</pubDate>
				<category><![CDATA[Image security]]></category>
		<category><![CDATA[ai]]></category>
		<category><![CDATA[deepfake]]></category>
		<category><![CDATA[disinformation]]></category>
		<category><![CDATA[image security]]></category>
		<category><![CDATA[misinformation]]></category>
		<guid isPermaLink="false">https://smartframe.io/?p=69478</guid>

					<description><![CDATA[<p>Images and videos have long been edited to deceive, but the believability [&#8230;]</p>
<p>The post <a href="https://smartframe.io/blog/deepfake-videos-have-us-concerned-are-we-overlooking-another-threat/">Deepfake videos have us concerned, but are we overlooking a more sinister threat found within them?</a> appeared first on <a href="https://smartframe.io">SmartFrame</a>.</p>
]]></description>
										<content:encoded><![CDATA[		<div data-elementor-type="wp-post" data-elementor-id="69478" class="elementor elementor-69478" data-elementor-post-type="post">
				<div class="elementor-element elementor-element-7e2e00ff e-flex e-con-boxed e-con e-parent" data-id="7e2e00ff" data-element_type="container" data-e-type="container" data-settings="{&quot;ekit_has_onepagescroll_dot&quot;:&quot;yes&quot;}">
					<div class="e-con-inner">
				<div class="elementor-element elementor-element-6365d8b1 elementor-widget elementor-widget-text-editor" data-id="6365d8b1" data-element_type="widget" data-e-type="widget" data-settings="{&quot;ekit_we_effect_on&quot;:&quot;none&quot;}" data-widget_type="text-editor.default">
									<p class="blog-stand-first">Images and videos have long been edited to deceive, but the believability of recent deepfake videos highlights another threat that could evolve into a much larger problem.</p>
<p>Manipulating images was commonplace long before photography entered the digital era. Whether it was to disguise wonky composition or to cut away something from the edge of the frame, photographers have long used the tricks of the darkroom to make us believe that an image was originally captured as it eventually appeared.</p>
<p>The editing process may be different today, but the tools used to carry it out have long been accessible to all. For all but the most complex editing, even a computer is now unnecessary, as app-based editing and AI tools running on today’s powerful breed of smartphones and tablets achieve what would have been unthinkable ten years ago.</p>
<p>But while we’re used to the idea of online images not necessarily showing a scene or subject as it may have appeared in reality, it’s only in the past few years that video fakery has been so prominently discussed. </p>
<div class="youtube-container"><iframe loading="lazy" title="YouTube video player" src="https://www.youtube.com/embed/nwOywe7xLhs?si=GaYxnUBs0XOsXwMz" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen"></iframe></div>
<p>This has, of course, been spurred by the believability of many viral examples, such as recent videos that show what appears to be Tom Cruise playing golf and performing a magic trick. While undeniably impressive, other videos designed specifically to smear politicians and other public figures, and to undermine democratic processes, show just how easily these tools can be weaponized too.</p>
<h4>Proving provenance</h4>
<p>Clearly, some of these examples are intended to entertain rather than mislead us. But, collectively, they provide a backdrop for a handful of recent initiatives designed to provide greater clarity on the provenance of digital media.</p>
<p>Two of these are the <a href="https://smartframe.io/blog/content-authenticity-initiative-what-you-need-to-know/">Content Authenticity Initiative (CAI)</a> and <a href="https://www.originproject.info/" target="_blank" rel="noopener">Project Origin</a>. The former was launched in 2018 by Adobe, Twitter, and The New York Times Company, with an initial mission of developing the industry standard for content attribution in order to help people determine what’s likely to be trustworthy.</p>
<p>Project Origin, meanwhile, founded last year, brought together the BBC, Canadian Broadcasting Corporation/Radio Canada, Microsoft, and The New York Times Company, with a more targeted focus on news organizations. The similar aims of the two gave a logical basis for their collaboration on a <a href="https://www.jointdevelopment.org/" target="_blank" rel="noopener">Joint Development Foundation</a> project, named the <a href="https://c2pa.org/" target="_blank" rel="noopener">Coalition for Content Provenance and Authenticity (C2PA)</a>, which was formed through an alliance between Adobe, Arm, Intel, Microsoft, and Truepic.</p>
<p>Having the participation of leaders in their field for these initiatives is imperative if they are to be widely adopted. But another obvious benefit is that it allows each organization to bring its own specific expertise to the party.</p>
<p>This is important, as deception can come through many different types of media. And the CAI has not limited its scope here; its white paper <a href="https://contentauthenticity.org/blog/cai-achieves-milestone-white-paper-sets-the-standard-for-content-attribution" target="_blank" rel="noopener">makes it clear</a> that what it presents is “a set of standards that can be used to create and reveal attribution and history for images, documents, time-based media (video, audio) and streaming content.”</p>
<div class="youtube-container"><iframe loading="lazy" title="YouTube video player" src="https://www.youtube.com/embed/Xd6vtHMlse4?si=JiJcsTcj7XWekiJa" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen"></iframe></div>
<p>We may immediately think of dubious news articles, doctored images and deepfake videos when we think about deceptive information online, but a <a href="https://www.bbc.co.uk/news/technology-57842514" target="_blank" rel="noopener">recent news story</a> highlighted one direction in which this deception could evolve. An interview with the director of an upcoming documentary about Anthony Bourdain revealed that the voice of the late chef and television presenter had been deepfaked with the help of AI, the purpose being to make viewers believe that Bourdain himself had narrated text from an email he had written. The story was <a href="https://www.theguardian.com/food/2021/jul/16/anthony-bourdain-documentary-ai-voiceover-roadrunner" target="_blank" rel="noopener">met with an angry response</a> from many individuals, including those who knew Bourdain personally.</p>
<p>This shouldn’t have come as much of a surprise, although it’s entirely possible we have already been subjected to this same manipulation without realizing it. Indeed, it’s easy to imagine a situation that would draw few objections.</p>
<p>Consider a recorded voiceover in need of a few adjustments, for example, but with no possibility of the original actor re-recording it. It’s reasonable to assume that, were they alive, the actor in question may well consent to this if it meant a project could be finished. Would it receive a similar reaction, were people to find out this had happened? Or is it more a question of respect, given that Bourdain is no longer alive? Perhaps the output is what matters more: if this were a work of fiction rather than a documentary, would we care as much?</p>
<div class="youtube-container"><iframe loading="lazy" title="YouTube video player" src="https://www.youtube.com/embed/GS0DQKHMpM8?si=PQCXgiJiu_krasDj" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen"></iframe></div>
<p>The use of AI in this way might be relatively new, but dubbing speech and using either archival footage or CGI when an actor is not available is commonplace. Even so, as an illustration of the rate of progress over the last 20 years, the clip above, taken from an episode of HBO’s <em>The Sopranos</em> that aired in 2001, demonstrates that what was once passable from a major television network is significantly behind what an individual without the same kind of budget can achieve today.</p>
<h4>Finding your voice</h4>
<p>The backlash over the Bourdain documentary voiceover is ongoing, but perhaps we should have seen these kinds of problems coming. After all, it’s not just the fact that the subjects in these deepfake videos look like Tom Cruise, Mark Zuckerberg and others – it&#8217;s that they sound like them too.</p>
<p>Quite how well you can spot this kind of trickery depends in part on exactly what’s being presented to you. Video and audio together can make deception easier to identify; speech may be slightly mismatched from mouth movements, for example, while blurring or unnatural facial expressions may also give things away. But if we’re listening to a voice alone, the absence of any visual quirks means we have a far greater chance of being fooled.</p>
<p>In many of these deepfake videos, no harm is intended or caused. If we show these to a friend, we do so to impress them with what has been created, rather than to hurt them in any way. Furthermore, as Chris Ume, the creator of the Tom Cruise deepfakes <a href="https://www.theverge.com/2021/3/5/22314980/tom-cruise-deepfake-tiktok-videos-ai-impersonator-chris-ume-miles-fisher" target="_blank" rel="noopener">has made clear</a>, creating these involves considerable time and effort, so the chance of us all becoming targets in something similarly sophisticated remains small.</p>
<p>But being deceived by a voice that belongs to someone you know suggests a much more sinister potential route for this technology. What if you were to receive a desperate call or voicemail from what sounded like a relative asking for help? Or for money? Or personal information of some sort? What if the same kind of trickery was used in a professional environment to authorize a financial transaction? Or to access an account of some sort that makes use of voice authentication?</p>
<p><script async src="https://static.smartframe.io/embed.js"></script><smartframe-embed customer-id="7d0b78d6f830c45ae5fcb6734143ff0d" image-id="shutterstock_1781047973_1627565515926" theme="blog-new" style="width: 100%; display: inline-flex; aspect-ratio: 5202/3468; max-width: 5202px;"></smartframe-embed><!-- https://smartframe.io/embedding-support --></p>
<p>How would this even work? The director of the Bourdain documentary <a href="https://www.gq.com/story/anthony-bourdain-morgan-neville-roadrunner-documentary" target="_blank" rel="noopener">has stated that it took over 10 hours of audio of Anthony Bourdain speaking</a>, fed into a machine-learning program, to develop this synthetic voice. A quick online search reveals a slew of tools that promise to let you synthesize your own voice in a similar manner, but, for the purposes of illicit activity, such tools could easily be abused to synthesize voices belonging to others. </p>
<p>When you consider just how many people – even those who many would not consider particularly famous – have clocked up a similar amount of speaking time in high-quality, publicly accessible media (YouTube videos, podcasts, webinars and so on), it’s hard not to wonder what the consequences could be.</p>
<p>Perhaps this sounds far-fetched. Perhaps the way these kinds of harms evolve means that things will take another route, resulting in a different kind of threat with a different set of safeguards. And perhaps these safeguards will ensure that as soon as these threats materialize the fallout will be minimal. Nevertheless, it’s not inconceivable that it may come to a point where we start to feel the need to guard our voice online like we guard our images and personal information today. </p>								</div>
					</div>
				</div>
				</div>
		<p>The post <a href="https://smartframe.io/blog/deepfake-videos-have-us-concerned-are-we-overlooking-another-threat/">Deepfake videos have us concerned, but are we overlooking a more sinister threat found within them?</a> appeared first on <a href="https://smartframe.io">SmartFrame</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Online threats appear to be getting worse. But why?</title>
		<link>https://smartframe.io/blog/online-threats-appear-to-be-getting-worse-so-how-has-it-come-to-this/</link>
		
		<dc:creator><![CDATA[Matt Golowczynski]]></dc:creator>
		<pubDate>Wed, 11 Nov 2020 14:32:11 +0000</pubDate>
				<category><![CDATA[Image security]]></category>
		<category><![CDATA[News & Features]]></category>
		<category><![CDATA[brand safety]]></category>
		<category><![CDATA[disinformation]]></category>
		<category><![CDATA[image security]]></category>
		<category><![CDATA[misinformation]]></category>
		<guid isPermaLink="false">https://smartframe.io/?p=63584</guid>

					<description><![CDATA[<p>Images are behind many of today’s online threats, and big tech companies [&#8230;]</p>
<p>The post <a href="https://smartframe.io/blog/online-threats-appear-to-be-getting-worse-so-how-has-it-come-to-this/">Online threats appear to be getting worse. But why?</a> appeared first on <a href="https://smartframe.io">SmartFrame</a>.</p>
]]></description>
										<content:encoded><![CDATA[		<div data-elementor-type="wp-post" data-elementor-id="63584" class="elementor elementor-63584" data-elementor-post-type="post">
				<div class="elementor-element elementor-element-17fd25b7 e-flex e-con-boxed e-con e-parent" data-id="17fd25b7" data-element_type="container" data-e-type="container" data-settings="{&quot;ekit_has_onepagescroll_dot&quot;:&quot;yes&quot;}">
					<div class="e-con-inner">
				<div class="elementor-element elementor-element-7d68a8a7 elementor-widget elementor-widget-text-editor" data-id="7d68a8a7" data-element_type="widget" data-e-type="widget" data-settings="{&quot;ekit_we_effect_on&quot;:&quot;none&quot;}" data-widget_type="text-editor.default">
									<p class="blog-stand-first">
  Images are behind many of today’s online threats, and big tech companies are struggling to defeat them. But their failures are often compounded by our own actions. So what are they, and we, doing wrong?
</p>

<p>
  SmartFrame <a href="https://smartframe.io/blog/press-release-smartframe-technologies-appointed-as-an-associate-member-of-online-tech-safety-body-ostia/">recently joined the online safety tech industry body OSTIA as an associate member</a>, and will be working with the body’s members to raise awareness of the tools available to help protect people online.
</p>

<p>
  OSTIA’s formation earlier this year follows a <a href="https://www.gov.uk/government/consultations/online-harms-white-paper/online-harms-white-paper" target="_blank" rel="noopener noreferrer">white paper</a> released by the UK government that focuses on online harms. This paper details the various threats that internet users currently face, and outlines a new statutory duty of care – overseen by communications regulator Ofcom – that’s aimed at curbing various illegal and harmful activities.
</p>

<p>
  Online harms exist in many different forms, and these change over time, so a comprehensive discussion of them all would be well outside the scope of this article. But given that images are at the heart of many of these threats, a closer look at how they are captured, viewed, published, downloaded and shared helps us to understand where problems may lie. It’s not simply a case of needing to protect ourselves and those around us from harmful content; we also need to recognize how our own perfectly lawful actions can end up being problematic.
</p>

<p>
  Threats can, of course, exist in all corners of the internet. But many of the most problematic threats we face stem in part from the fact that social media platforms, together with other major online players, have become victims of their own success. As their audiences have ballooned, they’ve struggled to keep up with the volume (and nature) of content posted on their channels. There’s little doubt that the tools and processes already in place make these platforms far safer than they would otherwise be, and progress continues to be made as threats evolve. Nevertheless, their failure to address more basic issues is worrying, and a key factor in allowing these threats to continue.
</p>

<p>
  So what are these threats, and what risks are we taking? Where are we being failed by those who we expect to protect us? And how do we become more responsible internet users?
</p>

<h4>The threats we face</h4>

<p>
  Internet safety is a critical issue, not least because the vast majority of us are, in some way or another, online. In the UK, <a href="https://www.gov.uk/government/consultations/online-harms-white-paper/online-harms-white-paper" target="_blank" rel="noopener noreferrer">nearly nine in ten adults and 99% of 12 to 15 year olds are online</a>. In the US, <a href="https://www.statista.com/topics/2237/internet-usage-in-the-united-states/" target="_blank" rel="noopener noreferrer">over 85% of the population has access to the internet</a>, which equates to around 281m people. Within these are countless vulnerable people, from children and the elderly through to those who may be more susceptible to radicalization.
</p>

<p>
  Online threats, of some description, have always been with us. Even those of us who have been using the internet since it was first commercially available may find it difficult to remember a time when spam folders designed to collect suspicious-looking emails weren’t integrated into email services as standard, or when new PCs and laptops weren’t being bundled with anti-virus software. Scams and viruses are still with us, but as the internet has evolved, so have the dangers.
</p>

<p>
  <script src="https://embed.smartframe.io/7d0b78d6f830c45ae5fcb6734143ff0d.js" data-image-id="shutterstock_341144132_1603720011751" data-width="100%" data-max-width="4000px" data-theme="captions-article-1"></script>
</p>

<p>
  Many of today’s threats exploit the things we do every day: the smartphones and the images we take with them, the social media platforms we use, and the cloud-based services in which we store images and other files. These have rooted themselves in our day-to-day lives to the extent that we use them without much thought. And when we use them without much thought, we start to lose sight of the value they could have to someone without the best intentions.
</p>

<p>
  Threats that concern images can typically be sorted into two categories, namely threats based on images we create ourselves and those based on images created elsewhere that are in some way harmful. As may be expected, the most serious image-based threats discussed in the white paper typically concern some type of pornography, such as images depicting child nudity and exploitation, as well as extreme porn and revenge porn. Other threats mentioned that often involve images include harassment and cyberbullying, as well as content that incites violence or hate crimes, or that promotes terrorism or other illegal activity.
</p>

<p>
  The paper also highlights the problem of perfectly legal content and platforms intended for adult audiences being accessed by children, from pornography to dating apps. The complexity in adopting effective age-verification systems for legal pornographic websites has meant that plans to introduce this in the UK <a href="https://www.theguardian.com/culture/2019/oct/16/uk-drops-plans-for-online-pornography-age-verification-system" target="_blank" rel="noopener noreferrer">were recently abandoned</a>. Similar systems are on the cusp of being introduced elsewhere (<a href="https://www.politico.eu/article/france-to-introduce-controversial-age-verification-system-for-adult-pornography-websites/" target="_blank" rel="noopener noreferrer">such as in France</a>), although technical problems and privacy concerns mean that it remains to be seen whether these will be successful.
</p>

<p>
  <script src="https://embed.smartframe.io/7d0b78d6f830c45ae5fcb6734143ff0d.js" data-image-id="shutterstock_1716020299_1603451858031" data-width="100%" data-max-width="1423px" data-theme="captions-article-1"></script>
</p>

<p>
  Together with the growing issue of misinformation – which, most problematically, concerns deliberate acts of disinformation – one can start to appreciate just how multi-faceted the issue of online harm is, and how crucial images are to the effectiveness of many of these threats. But what’s made these such a significant issue today?
</p>

<h4>How the situation has escalated</h4>

<p>
  Various factors have come together to make this the current reality. One of the most significant of these is the ease with which images can be taken by the average person. Standalone cameras have long been affordable, but technology has moved on to the point where these no longer need to be bought separately, given the proliferation of cameras inside smartphones, tablets and computers.
</p>

<p>
  Image-editing software, which can be used to manipulate photos for all kinds of malicious purposes, is free, easily downloadable as an app, or even bundled into social media platforms. The overwhelming majority of picture taking and editing is done with no malice, of course, but, as we shall see, such images can also be at the heart of many harms.
</p>

<script src="https://embed.smartframe.io/7d0b78d6f830c45ae5fcb6734143ff0d.js" data-image-id="shutterstock_515251183_1603714308456" data-width="100%" data-max-width="5760px" data-theme="captions-article-1"></script>

<p>Another factor is the increasing ease with which images can be shared with others. Social media platforms rely on user-generated content and activity, as this allows them to understand their audience and sell advertising, while also helping to attract new users. So it’s no surprise that they have spent years refining algorithms, user interfaces, integrations, notifications and discoverability; sharing images is easier than ever, but we rarely stop to consider who the real beneficiary of all these changes has been.</p>

<p>The third key factor is anonymity. While illegal and problematic images are also shared outside of social media platforms, the ability to participate in harmful activity while remaining anonymous has allowed individuals – and, more insidiously, groups posing as an individual – to leverage these platforms and target existing users with propaganda or other harmful content. <a href="https://www.ofcom.org.uk/__data/assets/pdf_file/0028/157249/cambridge-consultants-ai-content-moderation.pdf" target="_blank" rel="noopener noreferrer">As a paper produced on behalf of Ofcom highlights</a>, one thing that allows this to happen is the fact that most online communications are asynchronous; because the sender never witnesses the recipient&#8217;s negative emotional reaction to a message, they can be encouraged to act in a less inhibited manner. It&#8217;s easy to forget that social media platforms do not require any kind of proof of identity when opening an account, such as a copy of a passport or a driving license – an email address will usually suffice.</p>

<p>Encrypted messaging apps that allow individuals to send and receive images and videos securely have also risen in prominence in the past few years, their popularity being a logical consequence of years’ worth of focus on online privacy issues. These can be vital to journalists and others who may have a legitimate need for sensitive communication but, as we might have expected, their security has been exploited by criminals. Telegram in particular has been frequently cited in reports on the most prominent terrorist attacks in recent years.</p>

<h4>It doesn’t concern me – does it?</h4>

<p>Publishing extreme and obviously illegal content is one thing, but the overwhelming majority of online users do nothing of the sort, and comply with both the law and the terms of the platforms they use. Nevertheless, many everyday images that appear completely innocuous can be used for harm, whether the image owner is the intended victim or not.</p>

<p>Social media channels are an obvious place to discover and steal personal images. While threats exist across different platforms, Facebook appears to attract the most criticism, and there are many reasons for this. In November 2021 it was the <a href="https://www.statista.com/statistics/1201880/most-visited-websites-worldwide/" target="_blank" rel="noopener noreferrer">third most visited website globally</a>, and the <a href="https://www.statista.com/statistics/272014/global-social-networks-ranked-by-number-of-users/" target="_blank" rel="noopener noreferrer">most popular social media platform</a> in terms of monthly average users. The fact that people connect with their friends and family here means that the nature of the content shared on it is more personal than on platforms such as Twitter and YouTube. But it&#8217;s the diversity of the platform that sets it apart.</p>

<p>At a basic level, there&#8217;s the personal information and images that are openly shared with connections (or publicly), which can be used to harm the individual. On top of this, Messenger allows for threats to take place privately, while Groups <a href="https://www.wired.com/story/facebook-groups-are-destroying-america/" target="_blank" rel="noopener noreferrer">allow harmful ideas to reach new audiences when made public</a>, <a href="https://www.bbc.co.uk/news/blogs-trending-49902321" target="_blank" rel="noopener noreferrer">and to reverberate when set to private</a>. <a href="https://www.cnet.com/how-to/how-to-protect-yourself-when-using-facebook-marketplace/" target="_blank" rel="noopener noreferrer">Facebook&#8217;s Marketplace can attract scams</a> and other threats with some kind of financial goal, while issues with monitoring the live-streaming Live service <a href="https://uk.reuters.com/article/facebook-extremists/facebook-restricts-live-feature-citing-new-zealand-shooting-idUSL5N22R05J" target="_blank" rel="noopener noreferrer">became clear</a> after last year&#8217;s mass shooting in Christchurch, New Zealand. Other social media networks may have one or more of these elements, but it&#8217;s the fact that Facebook offers everything under one roof that forces it to contend with a diverse range of criminal behavior.</p>

<p>There are many reasons why someone may wish to steal images from someone’s account. A common scam on Facebook, for example, sees a duplicate profile of an individual created by an impersonator, who then sends friend requests to that user’s friends under the pretense of this being a new, but genuine, account. Once their request is accepted, the impersonator has the same level of access to the acceptor’s account as other friends of the original user, which may include access to images, friend lists and personal profile information. They also have the ability to message that person’s friends and family in an attempt to extract personal information.</p>

<p><script src="https://embed.smartframe.io/7d0b78d6f830c45ae5fcb6734143ff0d.js" data-image-id="shutterstock_1555297913_1603718449169" data-width="100%" data-max-width="7692px" data-theme="captions-article-1"></script></p>

<p>Similar impersonations on Twitter see prominent users having their identity cloned in order to send spam or links to malware, or to promote counterfeit goods, with the target audience being followers of the genuine account. Even just a profile image may be enough to deceive other users (and at the time of writing, this image is available to view and download from accounts that have been set to private, or that have blocked malicious accounts from interacting with them or reading their tweets).</p>

<p>Tools for reporting such attacks when they’re noticed have been a part of social media platforms for some time, but a recent news article details a more sinister type of threat that can&#8217;t be detected in the same way. Celebrities have traditionally been the subject of fake pornographic images and, more recently, deepfake videos, but <a href="https://www.bbc.co.uk/news/technology-54584127" target="_blank" rel="noopener noreferrer">a recent story</a> confirms that such a threat is no longer confined to those in the public eye. The story details how, since July 2019, over 100,000 images of women were harvested from social media sites and treated with AI tools to create fake nude images, before these were posted on Telegram. While the effectiveness of this technology in creating convincing images has been questioned, it will only improve in the future – and the fact that these images are being published in encrypted channels means the likelihood of the victims ever discovering them is small.</p>

<p>These are just three examples of image-based threats that exist today, and key to them all is the theft of images from a social media profile. Their effectiveness depends on it. The platforms from which these images are taken decide how easily such images can be stolen, which also means they play a role in determining how problematic these threats can become.</p>

<p>Sharing images of ourselves is one thing, but things can become more problematic when we choose to share images of others. Social media being what it is, it’s difficult not to do this when we’re keeping friends and family updated on our lives, but the more that’s shared, the better we need to understand the security measures available to us.</p>

<p>In 2015, Australia’s Children’s eSafety Commissioner <a href="https://www.smh.com.au/national/millions-of-social-media-photos-found-on-child-exploitation-sharing-sites-20150929-gjxe55.html" target="_blank" rel="noopener noreferrer">revealed</a> that of the 45m images discovered on a single child exploitation site, around half appeared to be innocent images that were sourced from social media platforms such as Kik, Facebook and Instagram. Even if there was nothing inappropriate about the images themselves, they were said to have been frequently accompanied by comments that sexualized the subjects within them, and categorized into folders by the subjects’ appearance, age or another attribute.</p>

<p><script src="https://embed.smartframe.io/7d0b78d6f830c45ae5fcb6734143ff0d.js" data-image-id="shutterstock_695029054_1603463469378" data-width="100%" data-max-width="4523px" data-theme="captions-article-1"></script></p>

<p>Family blogs maintained by parents without sufficient knowledge of online safety matters were also highlighted as a potential problem for the same reason, and the issue of digital literacy becomes more important as the age of those charged with supervising children rises. Poor knowledge of online risks among <a href="https://www.nzherald.co.nz/lifestyle/how-grandparents-low-digital-literacy-could-be-harming-your-kids/BVN3GQ5O3Q62FBJSFD4IJCIWCI/" target="_blank" rel="noopener noreferrer">elderly internet users</a> creates enough problems of its own for that demographic, but it presents additional dangers when these users have children in their care, as the usual restrictions that may be in place at home on smartphones, tablets and computers may not be enabled on devices belonging to these users.</p>

<p>One would think that as these platforms have grown and gained greater resources for tackling illegal content, the dangers would be minimized. But if we go by volume, the problem only appears to be getting worse. In 2014, there were just over 1m reports globally of images concerning child exploitation. Last year, the New York Times <a href="https://www.nytimes.com/interactive/2019/09/28/us/child-sex-abuse.html" target="_blank" rel="noopener noreferrer">reported</a> that there had been 18.4m reports worldwide concerning indecent images and videos of children online – double the number of cases in the previous year. The article also mentions that those familiar with the reports claimed that 12m of these cases concerned Facebook’s Messenger platform. <a href="https://transparency.facebook.com/community-standards-enforcement#child-nudity-and-sexual-exploitation" target="_blank" rel="noopener noreferrer">Facebook’s own reported data</a> shows it had acted on a total of 37.4m individual pieces of content that concerned child nudity or sexual exploitation, and the figures from the first two quarters of 2020 show a marked increase.</p>

<h4>The dark reality of keeping us safe</h4>

<p>Between the law, social platforms’ own terms of use, and common sense, the average user shouldn’t have too many problems understanding what kind of content can and cannot be shared online. When it comes to stopping the spread of problematic content, progress has no doubt been made over the years, partly in response to threats that have grown in prominence and partly in response to pressure from lawmakers (particularly over the last few years, as the dangers of disinformation and political interference have become more prevalent).</p>

<p>But in some cases, the specific content within an image lands it in something of a gray area, where a decision comes down to a judgment call more than anything else. Today, such decisions are made on social media platforms by a mixture of artificial intelligence and human moderators. The former may still be in its infancy, but it is said to be adept at tackling nudity, reportedly able to <a href="https://www.theguardian.com/technology/2020/jun/17/not-just-nipples-how-facebooks-ai-struggles-to-detect-misinformation" target="_blank" rel="noopener noreferrer">correctly identify and automatically remove 99.2% of offending images</a>. Nevertheless, some level of human moderation is still required for other types of problematic content – and as this has become more of an issue, reports of the effects of this content on moderators have made for disturbing reading.</p>

<p>Last year, <a href="https://www.theguardian.com/technology/2019/sep/17/revealed-catastrophic-effects-working-facebook-moderator" target="_blank" rel="noopener noreferrer">The Guardian reported</a> that contractors tasked with moderating content on Facebook claimed to have witnessed colleagues becoming addicted to extreme graphic content and hoarding it for themselves (a claim Facebook denies), as well as being influenced by the hateful, far-right material they were supposed to be vetting.</p>

<p><script src="https://embed.smartframe.io/7d0b78d6f830c45ae5fcb6734143ff0d.js" data-image-id="shutterstock_770840635_1605024326573" data-width="100%" data-max-width="4500px" data-theme="captions-article-1"></script></p>

<p>Earlier this year, <a href="https://www.theverge.com/2020/5/12/21255870/facebook-content-moderator-settlement-scola-ptsd-mental-health" target="_blank" rel="noopener noreferrer">Facebook paid $52m to moderators</a> who had claimed their work had led them to develop mental health issues, with some claiming to have experienced symptoms of post-traumatic stress disorder (PTSD). <a href="https://www.bbc.com/news/technology-51245616" target="_blank" rel="noopener noreferrer">As the BBC reported at the time</a>, one contractor that recruits such moderators had started to ask workers to sign a form acknowledging that they understood this work could lead to PTSD.</p>

<p>This issue is not new; Facebook has <a href="https://www.bbc.co.uk/news/technology-45639447" target="_blank" rel="noopener noreferrer">previously been sued</a> for similar reasons. Nor is it specific to Facebook; Microsoft <a href="https://thenextweb.com/microsoft/2017/01/12/microsoft-sued-by-employees-who-developed-ptsd-after-reviewing-disturbing-content/" target="_blank" rel="noopener noreferrer">faced a similar lawsuit</a> back in 2017, while YouTube is <a href="https://www.scribd.com/document/476939218/Moderator-complaint-against-YouTube#from_embed" target="_blank" rel="noopener noreferrer">currently being sued</a> by a former moderator who claims that it had failed to “provide a safe workplace for the thousands of contractors that scrub YouTube’s platform of disturbing content.”</p>

<p>Confidentiality agreements may explain why we haven&#8217;t heard more about this issue. During a <a href="https://twitter.com/FBoversight/status/1320758578886578179" target="_blank" rel="noopener noreferrer">Periscope stream</a> with The Real Facebook Oversight Board, one ex-moderator who moderated content on behalf of Facebook explained: &#8220;I think the biggest problem is NDAs, which can be held over your head &#8230; which can make it difficult to speak out about anything.&#8221; This raises an intriguing question: without reports of these lawsuits, would we know anything about this at all?</p>

<p>Clearly, a balance needs to be struck between human and AI moderation so that the general public is sufficiently protected without it creating such severe issues for a handful of individuals. But what are the chances of AI being able to take over completely?</p>

<p>Facebook, along with Twitter and YouTube, appears to have <a href="https://www.washingtonpost.com/technology/2020/03/23/facebook-moderators-coronavirus/" target="_blank" rel="noopener noreferrer">relied more on AI during the ongoing pandemic</a>. <a href="https://www.bbc.co.uk/news/technology-45639447" target="_blank" rel="noopener noreferrer">It has previously stated</a>, in response to another lawsuit, that it wanted to move towards this model. This was two years ago, and the fact that human moderators are still being used suggests that either the technology isn’t quite there yet or that the threats are changing too rapidly (or, more likely, a combination of the two). Furthermore, while a deeper shift towards AI sounds like a positive solution for moderators, concerns remain. &#8220;My guess is that as AI gets better at recognizing patterns in stuff that&#8217;s constantly posted &#8230; we&#8217;ll just get more of the extreme borderline content and have to make harder decisions more often,&#8221; the ex-moderator stated.</p>

<p>Another ex-moderator on the same live-stream agreed. &#8220;At the time [AI] didn&#8217;t seem like the best indicator as to what was violating or not. It felt like job security for sure &#8230; As soon as you recognize the pattern, the internet will change the pattern. It&#8217;ll work and get better, but there will always be borderline [content]. We&#8217;re good at that as people. The algorithm and bots are only going to do so much. You&#8217;ll still need a content moderator – always.&#8221;</p>

<p>Whatever ratio of AI and human moderation is used, the reality is that, right now, people who we have never met are watching some of the most disturbing content online to ensure it gets nowhere near us. These platforms may claim to support these individuals, but these lawsuits, together with the comments from ex-moderators, indicate that this is something they are still grappling with.</p>

<h4>How social platforms are making things worse</h4>

<p>Preventing images that shouldn’t exist from reaching us is one thing, but freely providing tools for downloading other people’s lawful images only compounds the problems online users face.</p>

<p>Anyone using Facebook will usually see a Download button next to images posted by friends and other accounts, and images that do not have this (because of privacy settings) can often still be stolen by conventional means. It’s not even necessary to be someone’s friend to have access to this control on their images. While security options can be customized, it&#8217;s possible to download images from profiles found via search, or when a connection shares someone else&#8217;s image. At the time of writing, default privacy settings give friends of friends the same kind of access to this content that friends have, and the privacy tour that new users are invited to take to better understand the settings available to them is entirely optional.</p>

<p>Even if no malice is intended in downloading such an image, the presence of such a control shows little regard for the protection of an image owner’s content or their copyright. While UK and US copyright laws both detail a handful of scenarios in which images belonging to others may be copied and used, the narrowness of these exemptions makes the provision of this button puzzling.</p>

<p>Such issues are problematic on other platforms too, albeit to a lesser extent. Images cannot be right-clicked or downloaded with a dedicated control from (Facebook-owned) Instagram, for example, although screenshots are possible and these images are easily accessible in the page’s source code. Twitter also doesn’t have a download button of any sort, but right-clicks, drag-and-drop saving and screenshots are all allowed.</p>

<p>Photo-hosting site Flickr has similarly problematic controls. At the time of writing, it displays a license type underneath images uploaded to its platform, and the default option is All Rights Reserved. This, <a href="https://www.flickr.com/help/terms" target="_blank" rel="noopener noreferrer">as it explains</a>, means that:</p>

<p><i>“You, the copyright holder, reserve all rights provided by copyright law, such as the right to make copies, distribute your work, perform your work, license, or otherwise exploit your work; no rights are waived under this license.”</i></p>

<p>And yet, just to the side of this license is a &#8220;download this photo&#8221; button. Not only that, but clicking on this presents a range of image sizes to choose from.</p>

<p>To its credit, Flickr does support a range of other licensing options and allows users to prevent direct downloads in the manner described above. Bans on right-clicks and drag-and-drop actions prevent the image from being saved from this page, but there is no protection against screenshots, and the image is still available in the page’s source code. Even if users do choose the more secure option, finding a page with the image in a range of sizes is straightforward enough. Anyone intent on stealing such an image can do so without much effort.</p>

<h4>Mobile challenges</h4>

<p>Images are, of course, stolen from other channels outside of social media sites, and this problem extends to the mobile devices we use.</p>

<p>Some of the most common browsers used on smartphones and tablets, from Google Chrome and Microsoft Edge through to Firefox and Brave, have a context menu that appears upon a long press on an image, one that looks much like a right-click popup on a computer. While these menus differ slightly from one another, the option to download the image directly – or at least open it in a new tab where it’s isolated from other page content – is typically provided among these options. What are the chances that someone is using this to download their own image versus the chance that it’s being used to download someone else’s image without their permission or knowledge?</p>

<p>Even if browsers prohibit these kinds of actions, the option to screenshot images on mobile devices will typically be available. As users of Snapchat, and certain banking and payment apps, may already know, screenshot detection and/or blocking has been around for some time, although it can be circumvented and is lacking from many of the apps that would particularly benefit from it.</p>

<p>Dating apps are an obvious example. While the threats associated with these have traditionally centered on the physical dangers of meeting strangers in person, image theft brings with it additional problems that don’t necessarily rely on any physical contact.</p>

<p>These include catfishing, which typically sees a new profile set up with stolen images in a bid to gain users’ trust and coax them into divulging personal or sensitive information (which obviously includes images). Earlier this year, it was reported that over <a href="https://gizmodo.com/70-000-tinder-photos-of-women-just-got-dumped-on-a-cybe-1841043456" target="_blank" rel="noopener noreferrer">70,000 images were scraped from Tinder</a> and found on a cyber-crime forum, for unknown reasons. This came less than three years after a user claimed to have exploited <a href="https://techcrunch.com/2017/04/28/someone-scraped-40000-tinder-selfies-to-make-a-facial-dataset-for-ai-experiments/" target="_blank" rel="noopener noreferrer">Tinder’s API to scrape 40,000 images</a> in order to create a facial dataset. Incidentally, these figures come nowhere near the 3bn or so images said to have been scraped by a start-up <a href="https://www.nytimes.com/2020/01/18/technology/clearview-privacy-facial-recognition.html" target="_blank" rel="noopener noreferrer">from Facebook, YouTube, Venmo and other sites for the same reason</a>.</p>

<p>These are extreme examples, but the ability to scrape images in such quantities in one go affects enough people to make even isolated incidents significant. More everyday image theft is typically carried out through conventional image saving or screenshots, and at the time of writing, the majority of popular dating apps, such as Hinge, Tinder, Bumble, Plenty of Fish and OkCupid, do not notify users when someone has taken a screenshot of their image. One notable exception, however, is Grindr, which recently introduced this as an optional setting.</p>

<h4>Taking responsibility</h4>

<p>Any kind of solution to these issues must balance security with practicality. Suggesting that people simply stop posting images online, or never use dating apps, is not the answer. But drawing their attention to the fact that they can never be completely sure that an image posted online or through an app only exists where it was originally published may be a sobering enough thought for them to reconsider what they share, and how they share it, in the first place.</p>

<p>So, unless images are published <a href="https://smartframe.io/image-security/">in a way that provides robust protection against being downloaded</a>, the questions to be asked are: What is being posted? Where is it being posted and how? How easily can it be seen by others and stolen? And does the person who is publishing this content understand the possible risks in doing so?</p>

<p>While many would be reluctant to give up using social media platforms they&#8217;ve become accustomed to, at the very minimum, it’s a good idea to review the current safety tools on offer. These change over time, and it’s easy to overlook new features that may protect accounts and their users as they are introduced, so it’s worth checking current documentation to understand how secure your social media accounts actually are.</p>

<p>Running through friend lists for any duplicate contacts, or others that in some way don’t look right, is also a good idea; you may well trust the hundreds of connections you have on a social media account, but it only takes one account being compromised for problems to start. Noticing spam being posted by a friend suggests their account has been subject to such an attack, which means it might be best to report the activity and unfriend them, and to notify them of this through a different channel.</p>

<p>It may also be worth checking older inactive social media accounts that may still host images. If you’re particularly concerned about a specific image and its availability online, you may wish to perform a reverse search for it through Google Images to see if it can be found somewhere online.</p>

<p>Changing passwords on a regular basis is also a good idea, as is using different passwords across different accounts and enabling multi-factor authentication where possible. The password-saving options within many browsers can be used to save longer and more complex passwords, although third-party tools are also available for this.</p>
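<p>For those who prefer to generate such passwords themselves rather than rely on a browser, a cryptographically secure random generator does the job. The sketch below uses Python&#8217;s standard-library <code>secrets</code> module; the function name and 20-character default are illustrative choices, not a recommendation from any particular standard:</p>

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Build a random password from letters, digits and punctuation
    using the OS's cryptographically secure random source."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# Example usage: each call produces an independent random password.
pw = generate_password()
```

<p>The key point is the use of <code>secrets.choice</code> rather than the <code>random</code> module, which is not designed for security-sensitive purposes.</p>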

<p>If you have a tendency to use the same passwords across multiple sites, or have done so in the past, it’s worth investigating whether your password details may have been leaked at some point. <a href="https://haveibeenpwned.com/">Have I Been Pwned</a> allows you to check whether your email address and other personal information may have got out in a historic data breach.</p>
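<p>Have I Been Pwned also exposes a Pwned Passwords &#8220;range&#8221; API that lets you check a password against known breaches without ever sending the password, or even its full hash, to the service: only the first five characters of its SHA-1 hash leave your machine, and the matching is done locally (a scheme known as k-anonymity). A minimal Python sketch, with illustrative function names:</p>

```python
import hashlib

def sha1_split(password: str) -> tuple:
    """Return the 5-character SHA-1 prefix and the remaining suffix
    (uppercase hex), as used by the Pwned Passwords range API."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def check_pwned(password: str) -> int:
    """Query the range API and return how many times the password
    appears in known breaches (0 if not found). Only the 5-character
    hash prefix is ever transmitted."""
    from urllib.request import urlopen
    prefix, suffix = sha1_split(password)
    with urlopen(f"https://api.pwnedpasswords.com/range/{prefix}") as resp:
        body = resp.read().decode()
    # The response lists hash suffixes and breach counts, one per line.
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0
```

<p>A non-zero return value means the password has appeared in at least one known breach and should be retired everywhere it is used.</p>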

<h4>Final thoughts</h4>

<p>Combating the threats outlined above is a significant and complex challenge. They differ from each other with respect to their targets, the nature of their operation, and the intended consequences. The fact that many of these may be considered to be harmful without quite straying into illegality makes them all the more difficult to police. But when we consider just how damaging these can be, and how many people they stand to affect, few would argue that the protections we have right now are sufficient.</p>

<p>No single solution will prevent or stop every threat, and new measures aimed at curbing these often raise questions that aren’t easy to answer. To what degree should we expect age verification systems to work in practice? How can developers of encrypted messaging apps provide their services while complying with the authorities? How do we define hate? And who gets to define it?</p>

<p>Success will depend on a number of factors. Educating online audiences – particularly younger users – on threats and making sure that everyone understands the tools available to combat them is important. Effective enforcement of codes of conduct from regulators will also be key. The ongoing development of a <a href="https://smartframe.io/blog/content-authenticity-initiative-what-you-need-to-know/">new content attribution model</a>, which promises to deliver greater transparency over online content courtesy of the <a href="https://contentauthenticity.org/" target="_blank" rel="noopener noreferrer">Content Authenticity Initiative</a>, is also very encouraging.</p>

<p>It&#8217;s also conceivable that, as AI improves, smartphone manufacturers may be required to use these tools to detect images that are likely to be problematic as soon as they are captured. Such a move would no doubt be difficult, and would be met with plenty of opposition and privacy concerns, but when you consider the accuracy with which AI tools are currently able to detect nudity, it&#8217;s easy to see things moving in this direction.</p>

<p>But unless the ease with which images can travel online is addressed, many threats will remain. Prevention, as the maxim goes, is better than cure, and too little has been done to stop image theft in the thirty or so years since the internet became commercially available. Now, we&#8217;re seeing the consequences.</p>
								</div>
					</div>
				</div>
				</div>
		<p>The post <a href="https://smartframe.io/blog/online-threats-appear-to-be-getting-worse-so-how-has-it-come-to-this/">Online threats appear to be getting worse. But why?</a> appeared first on <a href="https://smartframe.io">SmartFrame</a>.</p>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
