<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>brand safety Archives - SmartFrame</title>
	<atom:link href="https://smartframe.io/blog/tag/brand-safety/feed/" rel="self" type="application/rss+xml" />
	<link>https://smartframe.io/blog/tag/brand-safety/</link>
	<description>Ideal Presentation, Robust Protection and Easy Monetization</description>
	<lastBuildDate>Wed, 16 Jul 2025 11:13:44 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>

<image>
	<url>https://smartframe.io/wp-content/uploads/2023/09/fav-48x48-1.png</url>
	<title>brand safety Archives - SmartFrame</title>
	<link>https://smartframe.io/blog/tag/brand-safety/</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>SmartFrame awarded TAG Brand Safety Certified seal</title>
		<link>https://smartframe.io/blog/smartframe-awarded-tag-brand-safety-certified-seal/</link>
		
		<dc:creator><![CDATA[Matt Golowczynski]]></dc:creator>
		<pubDate>Mon, 22 Jul 2024 09:26:05 +0000</pubDate>
				<category><![CDATA[News & Features]]></category>
		<category><![CDATA[brand safety]]></category>
		<category><![CDATA[smartframe]]></category>
		<guid isPermaLink="false">https://smartframe.io/?p=118127</guid>

					<description><![CDATA[<p>Brand safety certification from the world&#8217;s largest initiative of its kind follows [&#8230;]</p>
<p>The post <a href="https://smartframe.io/blog/smartframe-awarded-tag-brand-safety-certified-seal/">SmartFrame awarded TAG Brand Safety Certified seal</a> appeared first on <a href="https://smartframe.io">SmartFrame</a>.</p>
]]></description>
										<content:encoded><![CDATA[		<div data-elementor-type="wp-post" data-elementor-id="118127" class="elementor elementor-118127" data-elementor-post-type="post">
				<div class="elementor-element elementor-element-13e096d4 e-flex e-con-boxed e-con e-parent" data-id="13e096d4" data-element_type="container" data-e-type="container" data-settings="{&quot;ekit_has_onepagescroll_dot&quot;:&quot;yes&quot;}">
					<div class="e-con-inner">
				<div class="elementor-element elementor-element-2e305e7c elementor-widget elementor-widget-text-editor" data-id="2e305e7c" data-element_type="widget" data-e-type="widget" data-settings="{&quot;ekit_we_effect_on&quot;:&quot;none&quot;}" data-widget_type="text-editor.default">
									<p class="blog-stand-first">Brand safety certification from the world&#8217;s largest initiative of its kind follows independent audit</p>
<p><span style="font-weight: 400;">Brand safety is a critical component of SmartFrame’s product and a key reason publishers and advertisers choose to do business with us.</span></p>
<p><span style="font-weight: 400;">So we’re delighted to announce that SmartFrame has been awarded the Brand Safety Certified (BSC) seal by the </span><a href="https://www.tagtoday.net/" target="_blank" rel="noopener"><span style="font-weight: 400;">Trustworthy Accountability Group (TAG)</span></a><span style="font-weight: 400;">.</span></p>
<p><span style="font-weight: 400;">Launched in 2020, TAG&#8217;s Brand Safety Certification program is the largest and broadest global program of its kind.</span></p>
<p><span style="font-weight: 400;">One of four digital certificates offered by TAG, the Brand Safety Certified seal is awarded to companies whose technologies protect against the misplacement of advertising on digital media.</span></p>
<p><span style="font-weight: 400;">The certification follows independent validation by a third-party auditor, and confirms that SmartFrame is proactively fighting fraudulent activity in digital advertising.</span></p>
<p>&#8220;Brands have the right to ensure their creatives are displayed in environments that align with their values,&#8221; says Rob Sewell, CEO of SmartFrame Technologies. &#8220;This certification validates the significant measures the company has implemented to make the SmartFrame platform trustworthy and resilient against bad actors. I&#8217;m delighted to see our efforts recognized by TAG.&#8221;</p>
<p><span style="font-weight: 400;">TAG was established in 2014 by the American Association of Advertising Agencies (4A&#8217;s), the Association of National Advertisers (ANA), and the Interactive Advertising Bureau (IAB), with a mission to combat criminal activity and protect brand safety in the digital ad supply chain.</span></p>
<p><span style="font-weight: 400;">Its Leadership Council includes several key players from the worlds of advertising, publishing, and tech, including Google, Disney, Meta, Dentsu, GroupM, and Omnicom.</span></p>								</div>
					</div>
				</div>
				</div>
		<p>The post <a href="https://smartframe.io/blog/smartframe-awarded-tag-brand-safety-certified-seal/">SmartFrame awarded TAG Brand Safety Certified seal</a> appeared first on <a href="https://smartframe.io">SmartFrame</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Google’s potential policy violation – unraveling the YouTube Ads story</title>
		<link>https://smartframe.io/blog/google-youtube-ads-violation-policies/</link>
		
		<dc:creator><![CDATA[Liam Machin]]></dc:creator>
		<pubDate>Tue, 25 Jul 2023 07:00:52 +0000</pubDate>
				<category><![CDATA[News & Features]]></category>
		<category><![CDATA[brand safety]]></category>
		<category><![CDATA[Google]]></category>
		<category><![CDATA[video]]></category>
		<category><![CDATA[youtube]]></category>
		<guid isPermaLink="false">https://smartframe.io/?p=81141</guid>

					<description><![CDATA[<p>Following Adalytics’ research into TrueView and skippable ad content on YouTube, now [&#8230;]</p>
<p>The post <a href="https://smartframe.io/blog/google-youtube-ads-violation-policies/">Google’s potential policy violation – unraveling the YouTube Ads story</a> appeared first on <a href="https://smartframe.io">SmartFrame</a>.</p>
]]></description>
										<content:encoded><![CDATA[		<div data-elementor-type="wp-post" data-elementor-id="81141" class="elementor elementor-81141" data-elementor-post-type="post">
				<div class="elementor-element elementor-element-6837c7b2 e-flex e-con-boxed e-con e-parent" data-id="6837c7b2" data-element_type="container" data-e-type="container" data-settings="{&quot;ekit_has_onepagescroll_dot&quot;:&quot;yes&quot;}">
					<div class="e-con-inner">
				<div class="elementor-element elementor-element-69134e9b elementor-widget elementor-widget-text-editor" data-id="69134e9b" data-element_type="widget" data-e-type="widget" data-settings="{&quot;ekit_we_effect_on&quot;:&quot;none&quot;}" data-widget_type="text-editor.default">
									<p class="blog-stand-first">Following Adalytics’ research into TrueView and skippable ad content on YouTube, now is an opportunity to reiterate the importance of increasing transparency across the entire ad industry.</p>

<p>Few things are as disheartening as someone going back on a promise. A friend backing out of a long-arranged catch-up, or a housemate or partner not doing their dishes – whatever it may be, you can be left feeling a little hard done by.</p>
<p>But when it comes to the cutthroat world of business – where contracts, piles of cash, and multinational behemoths reign supreme – the fallout tends to be a little more costly.</p>
<p>Cue more Big Tech controversy …</p>
<h4>What are Google and YouTube accused of?</h4>
<p>In short, Google and YouTube are accused of misleading brands about the impact of their ad campaigns as well as how (and where) their ads are displayed.</p>
<p><a href="https://adalytics.io/blog/invalid-google-video-partner-trueview-ads" target="_blank" rel="noopener">Adalytics claims</a> that Google, through its <a href="https://support.google.com/google-ads/answer/7166933?hl=en" target="_blank" rel="noopener">Video Partners program</a>, is placing ads in small, muted, automatically-played videos off to the side of a page’s main content, which goes against its own standards for monetization. </p>
<p>Perhaps most shockingly, it is claimed that this applies to around 80% of cases they analyzed in the study. </p>
<p>The company’s research, which includes “Fortune 500 brands, the US federal government, and many small businesses” in a data set compiled between 2020 and 2023, states that one out of every two ads is not even running on YouTube.</p>
<h4>How has Google responded?</h4>
<p>Google assures advertisers that these ads will be displayed on reputable sites, alongside the main video content, and that payment will only be required for non-skippable ads. </p>
<p>And, according to the original news publisher <a href="https://www.wsj.com/articles/google-violated-its-standards-in-ad-deals-research-finds-3e24e041" target="_blank" rel="noopener">The Wall Street Journal</a>, a Google statement said that “many claims are inaccurate” and that they will take “any appropriate actions once the full report is shared.” </p>
<p>Google’s director of global video solutions, Marvin Renaud, also released a <a href="https://blog.google/products/ads-commerce/transparency-and-brand-safety-on-google-video-partners/" target="_blank" rel="noopener">blog post in response to the findings</a> where he says: “Brands care deeply about where their ads are placed and so do we.”</p>
<h4>Advertisers – do your homework!</h4>
<p>Advertisers have the most important role to play in ensuring the success and integrity of their ad campaigns. They are the ones investing the money and they are the ones who have specific objectives in mind.</p>
<p>While tech giants must uphold their promises and maintain transparency, advertisers must remain proactive in safeguarding their brand reputation and investments.</p>
<p>Instead of relying solely on the assurances of the platforms they use, advertisers should take a hands-on approach to monitor and assess the performance of their ads.</p>
<p>By taking proactive steps through actions like regular audits, utilizing third-party verification, and establishing clear communication channels, advertisers can help keep their brand&#8217;s reputation intact, leading to more effective and efficient advertising campaigns overall.</p>
<h4>The perfect trio: transparency, due diligence, and reliable partnerships</h4>
<p>While there has been some contention online about the dramatization of certain figures mentioned in the research, there’s still a clear argument that businesses should be wary of concentrating significant budgets into domains where there’s controversy – even if they are the biggest platforms.</p>
<blockquote class="twitter-tweet tw-align-center">
<p dir="ltr" lang="en">Just want to affirm that <a href="https://twitter.com/AnthonyHigman?ref_src=twsrc%5Etfw">@AnthonyHigman</a> is entirely correct here. Google has lots of issues, but the article is misleading. We’ve run literally millions of dollars of YT campaigns and I’ve never once seen a CPV of anything close to $100…more like $2 to $10.</p>
<p>— Gil Gildner (@gilgildner) <a href="https://twitter.com/gilgildner/status/1674092447545032710?ref_src=twsrc%5Etfw">June 28, 2023</a></p>
</blockquote>
<p><script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script></p>
<p>In fact, Dutch MEP <a href="https://www.adexchanger.com/data-driven-thinking/the-problem-with-independent-third-party-verification-on-youtube-is-that-it-aint/" target="_blank" rel="noopener">Paul Tang has reportedly written to Roberta Metsola</a>, president of the European Parliament, to argue the need to reallocate its ad budgets away from Alphabet (the parent company of both Google and YouTube).</p>
<p>Nevertheless, in today&#8217;s ever-changing digital advertising landscape, it’s crucial for businesses to prioritize brand safety, transparency, and diligent partnerships. Without these key elements, companies risk wasting significant amounts of money.</p>
<p>It is vital to work alongside ad tech companies that provide complete transparency regarding the services they offer, because, as highlighted in the research, even major platforms like Google and YouTube can fall short of meeting advertisers&#8217; expectations and standards.</p>
<p>To limit the potential risk, brands should look to prioritize brand-safe environments and develop relationships with other <a href="https://smartframe.io/blog/premium-publisher-platforms-what-are-they-why-do-they-matter/" target="_blank" rel="noopener">premium publishers</a> to ensure a higher level of assurance in terms of brand safety and content quality, and to see better audience engagement. </p>
<p>The ever-growing importance of transparency and due diligence can’t be ignored. But hey, things can only get better … right? </p>
<iframe title="YouTube video player" src="https://www.youtube.com/embed/7W3yz6abJkU" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen"></iframe>								</div>
					</div>
				</div>
				</div>
		<p>The post <a href="https://smartframe.io/blog/google-youtube-ads-violation-policies/">Google’s potential policy violation – unraveling the YouTube Ads story</a> appeared first on <a href="https://smartframe.io">SmartFrame</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Brand safety: Everything you need to know about brand safety and suitability</title>
		<link>https://smartframe.io/blog/everything-you-need-to-know-about-brand-safety-brand-suitability/</link>
		
		<dc:creator><![CDATA[Liam Machin]]></dc:creator>
		<pubDate>Mon, 22 May 2023 09:21:24 +0000</pubDate>
				<category><![CDATA[In-image advertising]]></category>
		<category><![CDATA[advertising]]></category>
		<category><![CDATA[brand safety]]></category>
		<category><![CDATA[brand suitability]]></category>
		<category><![CDATA[images]]></category>
		<guid isPermaLink="false">https://smartframe.io/?p=80435</guid>

					<description><![CDATA[<p>This article will explore the concept of brand safety in advertising, why [&#8230;]</p>
<p>The post <a href="https://smartframe.io/blog/everything-you-need-to-know-about-brand-safety-brand-suitability/">Brand safety: Everything you need to know about brand safety and suitability</a> appeared first on <a href="https://smartframe.io">SmartFrame</a>.</p>
]]></description>
										<content:encoded><![CDATA[		<div data-elementor-type="wp-post" data-elementor-id="80435" class="elementor elementor-80435" data-elementor-post-type="post">
				<div class="elementor-element elementor-element-53bc2090 e-flex e-con-boxed e-con e-parent" data-id="53bc2090" data-element_type="container" data-e-type="container" data-settings="{&quot;ekit_has_onepagescroll_dot&quot;:&quot;yes&quot;}">
					<div class="e-con-inner">
				<div class="elementor-element elementor-element-5b0776d0 elementor-widget elementor-widget-text-editor" data-id="5b0776d0" data-element_type="widget" data-e-type="widget" data-settings="{&quot;ekit_we_effect_on&quot;:&quot;none&quot;}" data-widget_type="text-editor.default">
									<p class="blog-stand-first">This article will explore the concept of brand safety in advertising, why it&#8217;s becoming a more significant priority, and the challenges of creating a brand-safe campaign – even with the help of AI.</p>

<p>The digital environment has always been relatively complicated. But since the global pandemic thrust the prevalence of online misinformation and disinformation into the spotlight, the ambivalent nature of online content and the potential risks of online advertising have proved increasingly difficult to navigate for brands.</p>
<p>With this in mind, protecting the reputation of a brand&#8217;s online presence is becoming increasingly important. In this article, we&#8217;ll go over everything you need to know about the subject as well as the solutions available to minimize any potential risk. </p>
<h4>What is brand safety in advertising?</h4>
<p>Brand safety is a term that describes the process of regulating where and when ads appear to avoid negative associations with controversial content. This can include content that is violent, extremist, or that contains hate speech, among other things.</p>
<p>Businesses have a number of tools to help prevent their ads from appearing alongside content they do not want to be associated with.</p>
<p>Keyword blockers are one of these tools. However, keyword blocking is no silver bullet: its overzealous nature can lead to ads not appearing alongside safe and legitimate content, which can limit the reach of an advertising campaign and reduce its effectiveness. Furthermore, these blocklists can quickly become outdated or overly populated.</p>
<h4>Why is brand safety becoming more of a priority?</h4>
<p>Brand safety is becoming more crucial as a result of the importance of having a widespread online presence and the potential risks that go along with it. </p>
<p>The growth of independent digital media outlets has given rise to a greater range of issues concerning brand safety, such as ad fraud, fake news, hate speech, and inappropriate content. Consumers are more conscious of the content they consume as a result, and brands face reputational and financial risks. </p>
<p>When a person forms an opinion of a brand, they rely on a range of explicit, external signals such as messaging, online presence, ads, recommendations, and reviews in order to make a judgment. </p>
<p>By developing a consistent tone of voice, a brand can convey both its values and the quality of its products and services. </p>
<p>Delivering great customer service on top of that further contributes to building a positive reputation – with word of mouth still remaining one of the <a href="https://www.lxahub.com/stories/word-of-mouth-marketing-stats-and-trends-for-2023#:~:text=88%25%20of%20consumers%20placed%20the,consumers%20trust%20brand%2Dsponsored%20content." target="_blank" rel="noopener">most trusted</a> forms of organic advertising.</p>
<p>While brand messaging is fairly straightforward to control, implicit signals, such as where a brand’s ads appear and what kind of content and websites it becomes associated with, can be harder to manage. </p>
<p>The industry is aware of it too. A recent survey from Mediaocean found that 40% of marketing leaders across different industries <a href="https://www.marketingtechnews.net/news/2023/jan/03/40-of-marketers-expect-increase-in-brand-safety-concerns/" target="_blank" rel="noopener">expect an increase in concerns around brand safety</a>.</p>
<p>However, <a href="https://iabeurope.eu/knowledge-hub/iab-europes-2023-brand-safety-poll/" target="_blank" rel="noopener">IAB Europe’s 2023 Brand Safety poll</a> revealed that more than half of industry professionals within the digital advertising space (53%) agreed that the industry has done a good job of tackling brand safety over the past 12 months – up from 36% in 2019.</p>
<p>Either way, brands do not want to risk alienating and losing customers by being linked to harmful content, and multiple studies have shown that brands that advertise in <a href="https://smartframe.io/blog/premium-publisher-platforms-what-are-they-why-do-they-matter/" rel="noopener">premium digital environments</a> receive additional legitimacy by extension. </p>
<p>As the aphorism goes: show me who your friends are and I will tell you who you are.</p>
<p><script async="" src="https://static.smartframe.io/embed.js"></script><smartframe-embed customer-id="7d0b78d6f830c45ae5fcb6734143ff0d" image-id="shutterstock_2122970090_1681819660916" theme="blog-new" style="width: 100%; display: inline-flex; aspect-ratio: 1.5 / 1; max-width: 6000px; --canvas-wedge-error-size: 6000;" lazy="" class="error md sff-error" tabindex="0"></smartframe-embed></p>
<h4>Why is brand suitability important?</h4>
<p>Brand suitability has emerged as a more tailored and individual approach to brand safety, one that takes specific brand needs, market research insights, context, and meaning into account when determining ideal advertising environments.</p>
<p>Traditionally, brand safety strategies have been very broad, involving techniques such as keyword blocking and URL blocklisting. </p>
<p>However, with the crisis in confidence brought on by COVID-19, alongside a never-ending torrent of online content, digital advertisers, agencies, and publishers have been looking for more control in their brand safety solutions. </p>
<p>Moreover, the volume and nature of online content – whether progressive or contentious – has intensified to the point where this kind of legacy protection often pits brand safety against scale and effectiveness.</p>
<p>For example, blanket exclusion lists might block news and entertainment sites for references to &#8220;violent&#8221; content, such as mentions of crime statistics or even scenes from a TV series, despite the website itself being a reputable and trustworthy source. </p>
<p>This caveat also predominantly impacts progressive and minority media. In 2022, for example, 65% of what tech firm Oracle terms progressive media content, which includes LGBTQ+ media, was <a href="https://www.adweek.com/programmatic/i-have-given-up-adverse-blocking-continues-to-burn-lgbtq-publishers/" target="_blank" rel="noopener">blocked by a standard exclusion list</a>.</p>
<p>Brand suitability goes one step further than brand safety: rather than simply avoiding inappropriate content, it purposefully targets brand-building inventory and maximizes every audience interaction. </p>
<p>When brands align all customer-facing and advertising assets into a consistent and coherent narrative, it builds a positive framework in which customer expectations and customer experiences meet. </p>
<h4>What are the consequences of unsafe advertising?</h4>
<p>A 2018 study carried out by <a href="https://magnaglobal.com/wp-content/uploads/2018/10/The-Brand-Safety-Effect-CHEQ-Magna-IPG-Media-Lab-BMW-Logo-101018.pdf" target="_blank" rel="noopener">CHEQ, Magna, and IPG Media Lab</a> demonstrated how consumers’ views of a brand showed a stark decline across key metrics after unsafe ad placement, with a: </p>
<ul>
	<li>2.8x decrease in willingness to associate with the brand</li>
	<li>2x reduction in purchase intent</li>
	<li>7x loss in brand quality perceptions</li>
</ul>
<p>Later, in 2019, a separate study revealed consumers <a href="https://doubleverify.com/newsroom/study-consumers-reject-brands-that-advertise-on-fake-news-and-objectionable-content-online/" target="_blank" rel="noopener">generally reject brands that advertise on platforms that host objectionable content</a>, with two-thirds of those surveyed saying they would stop using a brand if its ads appeared next to fake or offensive content.</p>
<p>Such findings are consistent with a <a href="https://www.marketingcharts.com/advertising-trends-228036" target="_blank" rel="noopener">more recent survey conducted in 2022</a> in which 65% of respondents stated that they would likely hold unfavorable views of brands that advertise on privately-owned platforms harboring extremist content such as hate speech, misinformation, and conspiracy theories. </p>
<p>Additionally, over half (51%) of respondents stated that they would hold negative opinions of brands that advertised on platforms with little to no content moderation policies, attitudes that carry over to purchase intent.</p>
<h4>Context is king: AI&#8217;s downfall</h4>
<p>The lack of context in exclusion lists is a major issue for any form of AI.</p>
<p>Let’s take the word &#8220;shot&#8221; as an example. It could mean a shot of alcohol, a tremendous shot (as a sports reference), or more harmful meanings associated with weapons and crime. </p>
<p>The definition of words depends on the context in which they appear – and it is by accurately identifying this context that brands can bridge the gap between risk and opportunity. </p>
<p>There is no doubt a need for more flexibility, agility, and precise analysis that doesn&#8217;t rely on rudimentary, surface-level readings. However, this solution must be able to decipher how terms and phrases relate to one another. </p>
<p><script async="" src="https://static.smartframe.io/embed.js"></script><smartframe-embed customer-id="7d0b78d6f830c45ae5fcb6734143ff0d" image-id="shutterstock_1371361877_1681819908726" theme="blog-new" style="width: 100%; display: inline-flex; aspect-ratio: 1.51 / 1; max-width: 4000px; --canvas-wedge-error-size: 4000;" lazy="" class="error md sff-error" tabindex="0"></smartframe-embed></p>
<p>A solution growing in popularity to help decode webpage content is the use of <a href="https://smartframe.io/blog/how-can-we-reduce-bias-in-ai/" rel="noopener">AI (Artificial Intelligence)</a>. Yet while this approach can fast-track otherwise time-consuming processes, it can still overzealously block certain sites. </p>
<p>Until these methods are 100% foolproof – which they might never become – it’s worth maintaining a level of human verification to avoid missing out on opportunities, both for brands who lose valuable inventory and publishers who may struggle to monetize topical and newsworthy content. </p>
<p>Detailed metadata embedded into images can provide trustworthy context and further drive AI accuracy. This data can include a wide range of information, such as the location where the image was taken, the date and time it was captured, and details about the camera settings used to take the photo. </p>
<p>By making use of such information, advertisers and publishers can help establish better accuracy with AI-based content analysis.</p>
<h4>Key considerations when developing advertising campaigns</h4>
<p>The optimal approach to brand safety remains nuanced, so it’s worth noting some of the prevalent uncomfortable truths that characterize the ambiguity of the topic.</p>
<h5>Programmatic advertising might be quick but it carries more risk</h5>
<p>Algorithmic software has sped up the buying and selling of digital advertising placements, which means buyers cannot predict where ads will appear with complete certainty. However, considerable progress has been made here. </p>
<p>A study conducted in 2022 by <a href="https://go.integralads.com/industry-pulse-report-2022-us.html" target="_blank" rel="noopener">Integral Ad Science</a> has shown that 14% of US digital media professionals surveyed consider programmatic advertising to be vulnerable to brand risk incidents – a stark contrast to the 53% of respondents that shared the same views the year before.</p>
<p>To feel confident with programmatic buying, there needs to be complete transparency between brands, agencies, publishers, and technology vendors, as well as a thorough understanding of the technologies used, their capabilities, and their limitations.</p>
<h5>Controversy sells</h5>
<p>Unfortunately, most of society is guilty of being drawn to controversial topics, and this is a reality that brands must consider and weigh up. Creating extensive exclusion lists may do more harm than good if it comes at the cost of visibility, scale, and reach. </p>
<p>High demand for safe sites will also drive up the price of the known, legacy media sites. This is why supporting lesser-known minority media publishers, and taking a considered, nuanced approach to brand safety, is important: it champions fresh perspectives and increases reach.</p>
<h5>Change is constant</h5>
<p>Information can be published and disseminated very quickly online – about as rapidly as this same information can be refuted and identified as fake. Public opinion is constantly shifting, sped up by 24-hour news cycles that continuously bring new events and developments to light. </p>
<p>While the adage says there’s no such thing as bad publicity, going viral for your advertisements <a href="https://www.thetimes.co.uk/article/big-brands-fund-terror-knnxfgb98" target="_blank" rel="noopener">isn’t always a good thing</a>. Whatever strategies and solutions brands use need to be constantly monitored and ready to adapt, alongside a stable set of values and principles that they stand by to avoid being seen as capricious.</p>
<h5>Flawless ad safety is a myth</h5>
<p>Try as everyone might, there is a good chance that there will be a misstep along the way. It’s human to misinterpret something or for something to slip through the cracks. Having a response strategy in place for when mistakes occur is crucial.</p>
<h4>Mistakes will happen: how to best prepare for crisis situations</h4>
<p>Unfortunately, even the most preventable crisis can feel random when it strikes. An efficient response strategy will involve outlining detailed guidelines that enable teams to work quickly and efficiently as they address stakeholder concerns. </p>
<p>Brands need a carefully curated approach with enough space to pivot in response to unfolding events. And since no two companies are the same, there is no one-size-fits-all response strategy.</p>
<p>A &#8220;Revisit, Reset, Repeat&#8221; mentality is key; by examining the tools available, resetting for current and ongoing events, and repeating as the news cycle evolves, guidelines can be constantly assessed and optimized.</p>
<p>In the event of a crisis, people will likely turn to social pages for updates on how a company is responding, so guidelines on sharing public apologies are also vital. These can be informed by social listening to brand health topics to enable the constant monitoring of online discourse around the business. </p>
<p>But as response strategies vary across companies, so do individual social media platform features, each containing its own set of rules that require different approaches to maintaining company values.</p>
<p>For example, <a href="https://www.facebook.com/business/help/1926878614264962?id=1769156093197771" target="_blank" rel="noopener">Meta</a> offers its own brand safety controls that work across Facebook, Instagram, and Messenger. Twitter, meanwhile, provides technical and general advice, with various content-moderation features specific to the platform – although many advertisers have <a href="https://edition.cnn.com/2023/02/10/tech/twitter-top-advertiser-decline/index.html" target="_blank" rel="noopener">paused their ad spend</a> in recent times.</p>
<p>On the other hand, TikTok has made great improvements in creating a safe space for brands to advertise through its <a href="https://www.tiktok.com/business/en-US/brand-safety" target="_blank" rel="noopener">Brand Safety Center</a>, which provides regularly updated news and recommendations on brand suitability for marketers within the platform.</p>
<h4>Why is brand safety important for publishers?</h4>
<p>Publishers have slightly different priorities when it comes to building a premium brand-safe environment that other companies want to advertise in.</p>
<p>As owners and producers of content, publishers have a responsibility to analyze, understand, and organize this content in a clear way, avoiding misinterpretation, misstatements, or omissions of information – anything that might drive revenue away by stopping advertisers from displaying ads on their websites. </p>
<p>Several factors, such as domain authority, viewability score, fill rates, and historical bid price, can influence advertisers’ decisions when placing their ads. Blocking invalid traffic, such as bot traffic, is also key to maintaining a high brand safety score. </p>
<p><script async="" src="https://static.smartframe.io/embed.js"></script><smartframe-embed customer-id="7d0b78d6f830c45ae5fcb6734143ff0d" image-id="shutterstock_510793918_1681820375747" theme="blog-new" style="width: 100%; display: inline-flex; aspect-ratio: 1.50263 / 1; max-width: 4000px; --canvas-wedge-error-size: 4000;" lazy="" class="error md sff-error" tabindex="0"></smartframe-embed></p>
<p>There&#8217;s also the issue of fake news, which has exploded into the digital consciousness and dominated news headlines with no signs of slowing down.</p>
<p>It is therefore in the publishers’ best interest to ensure a safe space for brands to advertise.</p>
<h4>Organized chaos or a journey to blissful duality?</h4>
<p>Ad placement is effective when it resonates positively with consumers. Unfortunately, when it comes to keeping brand reputation safe in the digital age, it isn’t just about ad content; it’s also about ad association. </p>
<p>Capitalizing on the ever-growing digital landscape is a complex process involving many variable factors. Ensuring brand safety requires careful analysis, not only of the brand itself, but of its messaging, the tools that build it, and the channels that deliver it to audiences.</p>
<p>This analysis includes the process of creating inclusion and exclusion lists for websites based on business objectives. Brands should carefully curate these lists and regularly review them to ensure they are up to date. </p>
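The list-curation process described above can be sketched in a few lines. This is a minimal illustration with hypothetical domain names and a hypothetical `isBrandSafePlacement` helper – not any particular ad platform's API – showing how an exclusion list is checked first, with an inclusion list then acting as an allowlist:

```javascript
// Minimal sketch (hypothetical data): checking a placement domain against
// a brand's curated inclusion/exclusion lists before buying an impression.
const exclusionList = new Set(["badnews.example", "piracy.example"]);
const inclusionList = new Set(["trustednews.example", "sports.example"]);

function isBrandSafePlacement(domain) {
  if (exclusionList.has(domain)) return false;                 // never advertise here
  if (inclusionList.size > 0) return inclusionList.has(domain); // allowlist mode
  return true;                                                  // no lists configured
}

console.log(isBrandSafePlacement("trustednews.example")); // true
console.log(isBrandSafePlacement("badnews.example"));     // false
```

In practice these lists live in the ad platform's campaign settings rather than in code, which is why regular review matters: a stale list silently allows or blocks the wrong placements.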
<p>Nevertheless, the digital advertising ecosystem is constantly evolving. This makes it difficult for brands to stay on top of the latest trends and threats.</p>
<p>Brands can respond by bringing together different solutions in a custom suite – including contextual targeting, which will play a more important role with the demise of the third-party cookie – staying diligent in monitoring ad placements, and being prepared to evolve these strategies as the digital landscape continues to change.</p>
<p>Every element of a company’s presence and its interactions in the digital space informs consumers of its values, whether intentionally or not – and ignoring this reality only makes it more damaging.</p> 								</div>
					</div>
				</div>
				</div>
		<p>The post <a href="https://smartframe.io/blog/everything-you-need-to-know-about-brand-safety-brand-suitability/">Brand safety: Everything you need to know about brand safety and suitability</a> appeared first on <a href="https://smartframe.io">SmartFrame</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Online threats appear to be getting worse. But why?</title>
		<link>https://smartframe.io/blog/online-threats-appear-to-be-getting-worse-so-how-has-it-come-to-this/</link>
		
		<dc:creator><![CDATA[Matt Golowczynski]]></dc:creator>
		<pubDate>Wed, 11 Nov 2020 14:32:11 +0000</pubDate>
				<category><![CDATA[Image security]]></category>
		<category><![CDATA[News & Features]]></category>
		<category><![CDATA[brand safety]]></category>
		<category><![CDATA[disinformation]]></category>
		<category><![CDATA[image security]]></category>
		<category><![CDATA[misinformation]]></category>
		<guid isPermaLink="false">https://smartframe.io/?p=63584</guid>

					<description><![CDATA[<p>Images are behind many of today’s online threats, and big tech companies [&#8230;]</p>
<p>The post <a href="https://smartframe.io/blog/online-threats-appear-to-be-getting-worse-so-how-has-it-come-to-this/">Online threats appear to be getting worse. But why?</a> appeared first on <a href="https://smartframe.io">SmartFrame</a>.</p>
]]></description>
										<content:encoded><![CDATA[		<div data-elementor-type="wp-post" data-elementor-id="63584" class="elementor elementor-63584" data-elementor-post-type="post">
				<div class="elementor-element elementor-element-17fd25b7 e-flex e-con-boxed e-con e-parent" data-id="17fd25b7" data-element_type="container" data-e-type="container" data-settings="{&quot;ekit_has_onepagescroll_dot&quot;:&quot;yes&quot;}">
					<div class="e-con-inner">
				<div class="elementor-element elementor-element-7d68a8a7 elementor-widget elementor-widget-text-editor" data-id="7d68a8a7" data-element_type="widget" data-e-type="widget" data-settings="{&quot;ekit_we_effect_on&quot;:&quot;none&quot;}" data-widget_type="text-editor.default">
									<p class="blog-stand-first">
  Images are behind many of today’s online threats, and big tech companies are struggling to defeat them. But their failures are often compounded by our own actions. So what are they, and we, doing wrong?
</p>

<p>
  SmartFrame <a href="https://smartframe.io/blog/press-release-smartframe-technologies-appointed-as-an-associate-member-of-online-tech-safety-body-ostia/">recently joined the online safety tech industry body OSTIA as an associate member</a>, and will be working with the body’s members to raise awareness of the tools available to help protect people online.
</p>

<p>
  OSTIA’s formation earlier this year follows a <a href="https://www.gov.uk/government/consultations/online-harms-white-paper/online-harms-white-paper" target="_blank" rel="noopener noreferrer">white paper</a> released by the UK government that focuses on online harms. This paper details the various threats that internet users currently face, and outlines a new statutory duty of care – overseen by communications regulator Ofcom – that’s aimed at curbing various illegal and harmful activities.
</p>

<p>
  Online harms exist in many different forms, and these change over time, so a comprehensive discussion of them all would be well outside the scope of this article. But given that images are at the heart of many of these threats, a closer look at how they are captured, viewed, published, downloaded and shared with others helps us to gain an understanding of where problems may lie. It’s not simply a case of needing to protect ourselves and those around us from harmful content; we also need to recognize how our own perfectly lawful actions can end up being problematic.
</p>

<p>
  Threats can, of course, exist in all corners of the internet. But it&#8217;s easy to see that many of the most problematic threats we face are in part down to the fact that social media platforms, together with other major online players, have become victims of their own success. As their audiences have ballooned, they’ve struggled to keep up with the volume (and nature) of content posted on their channels. There&#8217;s little doubt that they are a lot safer than they would otherwise be without certain tools and processes that are in place, and progress continues to be made as threats evolve. Nevertheless, their failure to address more basic issues is worrying, and a key factor that allows these threats to continue.
</p>

<p>
  So what are these threats, and what risks are we taking? Where are we being failed by those who we expect to protect us? And how do we become more responsible internet users?
</p>

<h4>The threats we face</h4>

<p>
  Internet safety is a critical issue, not least because the vast majority of us are, in some way or another, online. In the UK, <a href="https://www.gov.uk/government/consultations/online-harms-white-paper/online-harms-white-paper" target="_blank" rel="noopener noreferrer">nearly nine in ten adults and 99% of 12 to 15 year olds are online</a>. In the US, <a href="https://www.statista.com/topics/2237/internet-usage-in-the-united-states/" target="_blank" rel="noopener noreferrer">over 85% of the population has access to the internet</a>, which equates to around 281m people. Among them are countless vulnerable people, from children and the elderly through to those who may be more susceptible to radicalization.
</p>

<p>
  Online threats, of some description, have always been with us. Even those of us who have been using the internet since it was first commercially available may find it difficult to remember a time when spam folders designed to collect suspicious-looking emails weren’t integrated into email services as standard, or when new PCs and laptops weren’t being bundled with anti-virus software. Scams and viruses are still with us, but as the internet has evolved, so have the dangers.
</p>

<p>
  <script src="https://embed.smartframe.io/7d0b78d6f830c45ae5fcb6734143ff0d.js" data-image-id="shutterstock_341144132_1603720011751" data-width="100%" data-max-width="4000px" data-theme="captions-article-1"></script>
</p>

<p>
  Many of today’s threats exploit the things we do every day. Our smartphones and the images we take with them; the social media platforms we use; and the cloud-based services in which we store images and other files. These have rooted themselves in our day-to-day lives to the extent that we use them without much thought. And when we use them without much thought, we start to lose sight of the value they could have to someone without the best intentions.
</p>

<p>
  Threats that concern images can typically be sorted into two categories, namely threats based on images we create ourselves and those based on images created elsewhere that are in some way harmful. As may be expected, the most serious image-based threats discussed in the white paper typically concern some type of pornography, such as images depicting child nudity and exploitation, as well as extreme porn and revenge porn. Other threats mentioned that often involve images include harassment and cyberbullying, as well as content that incites violence or hate crimes, or that promotes terrorism or other illegal activity.
</p>

<p>
  The paper also highlights the problem of perfectly legal content and platforms intended for adult audiences being accessed by children, from pornography to dating apps. The complexity in adopting effective age-verification systems for legal pornographic websites has meant that plans to introduce this in the UK <a href="https://www.theguardian.com/culture/2019/oct/16/uk-drops-plans-for-online-pornography-age-verification-system" target="_blank" rel="noopener noreferrer">were recently abandoned</a>. Similar systems are on the cusp of being introduced elsewhere (<a href="https://www.politico.eu/article/france-to-introduce-controversial-age-verification-system-for-adult-pornography-websites/" target="_blank" rel="noopener noreferrer">such as in France</a>), although technical problems and privacy concerns mean that it remains to be seen whether these will be successful.
</p>

<p>
  <script src="https://embed.smartframe.io/7d0b78d6f830c45ae5fcb6734143ff0d.js" data-image-id="shutterstock_1716020299_1603451858031" data-width="100%" data-max-width="1423px" data-theme="captions-article-1"></script>
</p>

<p>
  Together with the growing issue of misinformation – which, most problematically, concerns deliberate acts of disinformation – one can start to appreciate just how multi-faceted the issue of online harm is, and how crucial images are to the effectiveness of many of these threats. But what’s made these such a significant issue today?
</p>

<h4>How the situation has escalated</h4>

<p>
  Various factors have come together to make this the current reality. One of the most significant of these is the ease with which images can be taken by the average person. Standalone cameras have long been affordable, but technology has moved on to the point where these no longer need to be bought separately, given the proliferation of cameras inside smartphones, tablets and computers.
</p>

<p>
  Image-editing software, which can be used to manipulate photos for all kinds of malicious purposes, is free, easily downloadable as an app, or even bundled into social media platforms. The overwhelming majority of picture taking and editing is done with no malice, of course, but, as we shall see, such images can also be at the heart of many harms.
</p>

<script src="https://embed.smartframe.io/7d0b78d6f830c45ae5fcb6734143ff0d.js" data-image-id="shutterstock_515251183_1603714308456" data-width="100%" data-max-width="5760px" data-theme="captions-article-1"></script>

<p>Another factor is the increasing ease with which images can be shared with others. Social media platforms rely on user-generated content and activity as this allows them to understand their audience and sell advertising, while also helping to attract new users. So it’s no surprise that they have spent years making changes to algorithms, user interfaces, integrations, notifications and discoverability; sharing images is easier than ever, but we rarely stop to consider who the real beneficiary of all these changes has been.</p>

<p>The third key factor is anonymity. While illegal and problematic images are also shared outside of social media platforms, the ability to participate in harmful activity while remaining anonymous has allowed individuals – and, more insidiously, groups posing as an individual – to leverage these platforms and target existing users with propaganda, or other harmful content. <a href="https://www.ofcom.org.uk/__data/assets/pdf_file/0028/157249/cambridge-consultants-ai-content-moderation.pdf" target="_blank" rel="noopener noreferrer">As a paper produced on behalf of Ofcom highlights</a>, one thing that allows this to happen is the fact that most online communications are asynchronous; when an individual cannot see a response to a message or another communication, they don&#8217;t see a negative emotional reaction to it, which can encourage them to act in a less inhibited manner. It&#8217;s easy to forget that social media platforms do not require any kind of proof of identity when opening an account, such as a copy of a passport or a driving license – an email address will usually suffice.</p>

<p>Encrypted messaging apps that allow for individuals to securely send and receive images and videos have also risen in prominence in the past few years, their popularity being a logical consequence of years’ worth of focus on online privacy issues. These can be vital to journalists and others who may have a legitimate need for sensitive communication but, as we might have expected, their security has been exploited by criminals. Telegram in particular has been frequently cited in reports on the most prominent terrorist attacks in recent years.</p>

<h4>It doesn’t concern me – does it?</h4>

<p>Publishing extreme and obviously illegal content is one thing, but the overwhelming majority of online users do nothing of the sort, and comply with both the law and the terms of the platforms they use. Nevertheless, many everyday images that appear completely innocuous can be used for harm, whether the image owner is the intended victim or not.</p>

<p>Social media channels are an obvious place to discover and steal personal images. While threats exist across different platforms, Facebook appears to attract the most criticism, and there are many reasons for this. In November 2021 it was the <a href="https://www.statista.com/statistics/1201880/most-visited-websites-worldwide/" target="_blank" rel="noopener noreferrer">third most visited website globally</a>, and the <a href="https://www.statista.com/statistics/272014/global-social-networks-ranked-by-number-of-users/" target="_blank" rel="noopener noreferrer">most popular social media platform</a> in terms of monthly active users. The fact that people connect with their friends and family here means that the nature of the content shared on it is more personal than on platforms such as Twitter and YouTube. But it&#8217;s the diversity of the platform that sets it apart.</p>

<p>At a basic level, there&#8217;s the personal information and images that are openly shared with connections (or publicly), which can be used to harm the individual. On top of this, Messenger allows for threats to take place privately, while Groups <a href="https://www.wired.com/story/facebook-groups-are-destroying-america/" target="_blank" rel="noopener noreferrer">allow harmful ideas to reach new audiences when made public</a>, <a href="https://www.bbc.co.uk/news/blogs-trending-49902321" target="_blank" rel="noopener noreferrer">and to reverberate when set to private</a>. <a href="https://www.cnet.com/how-to/how-to-protect-yourself-when-using-facebook-marketplace/" target="_blank" rel="noopener noreferrer">Facebook&#8217;s Marketplace can attract scams</a> and other threats with some kind of financial goal, while issues with monitoring the live-streaming Live service <a href="https://uk.reuters.com/article/facebook-extremists/facebook-restricts-live-feature-citing-new-zealand-shooting-idUSL5N22R05J" target="_blank" rel="noopener noreferrer">became clear</a> after last year&#8217;s mass shooting in Christchurch, New Zealand. Other social media networks may have one or more of these elements, but it&#8217;s the fact that Facebook offers everything under one roof that forces it to contend with a diverse range of criminal behavior.</p>

<p>There are many reasons why someone may wish to steal images from someone’s account. A common scam on Facebook, for example, sees a duplicate profile of an individual created by an impersonator, who then sends friend requests to that user’s friends under the pretense of this being a new, but genuine, account. Once their request is accepted, the impersonator has the same level of access to the acceptor’s account as other friends of the original user, which may include access to images, friend lists and personal profile information. They also have the ability to message that person’s friends and family in an attempt to extract personal information.</p>

<p><script src="https://embed.smartframe.io/7d0b78d6f830c45ae5fcb6734143ff0d.js" data-image-id="shutterstock_1555297913_1603718449169" data-width="100%" data-max-width="7692px" data-theme="captions-article-1"></script></p>

<p>Similar impersonations on Twitter see prominent users having their identity cloned in order to send spam or links to malware, or to promote counterfeit goods, with the target audience being followers of the genuine account. Even just a profile image may be enough to deceive other users (and at the time of writing, this image is available to view and download from accounts that have been set to private, or that have blocked malicious accounts from interacting with them or reading their tweets).</p>

<p>Tools for reporting such attacks when they’re noticed have been a part of social media platforms for some time, but a recent news article details a more sinister type of threat that can&#8217;t be detected in the same way. Celebrities have traditionally been the subject of fake pornographic images, and more recent deepfake videos, but <a href="https://www.bbc.co.uk/news/technology-54584127" target="_blank" rel="noopener noreferrer">a recent story</a> confirms that such a threat is no longer confined to those in the public eye. The story details how, since July 2019, over 100,000 images of women were harvested from social media sites and treated with AI tools to create fake nude images, before these were posted on Telegram. While the effectiveness of this technology in creating convincing images has been questioned, it will only improve in the future – and the fact that these images are being published in encrypted channels means the likelihood of the victim ever discovering them is small.</p>

<p>These are just three examples of image-based threats that exist today, and key to them all is the theft of images from a social media profile. Their effectiveness depends on it. The platforms from which these images are taken decide how easily such images can be stolen, which also means they play a role in determining how problematic these threats can become.</p>

<p>Sharing images of ourselves is one thing, but things can become more problematic when we choose to share images of others. Social media being what it is, it’s difficult not to do this when we’re keeping friends and family updated on our lives, but the more that’s shared, the better we need to understand the security measures available to us.</p>

<p>In 2015, Australia’s Children’s eSafety Commissioner <a href="https://www.smh.com.au/national/millions-of-social-media-photos-found-on-child-exploitation-sharing-sites-20150929-gjxe55.html" target="_blank" rel="noopener noreferrer">revealed</a> that of the 45m images discovered on a single child exploitation site, around half appeared to be innocent images that were sourced from social media platforms such as Kik, Facebook and Instagram. Even if there was nothing inappropriate about the images themselves, they were said to have been frequently accompanied by comments that sexualized the subjects within them, and categorized into folders by the subjects’ appearance, age or another attribute.</p>

<p><script src="https://embed.smartframe.io/7d0b78d6f830c45ae5fcb6734143ff0d.js" data-image-id="shutterstock_695029054_1603463469378" data-width="100%" data-max-width="4523px" data-theme="captions-article-1"></script></p>

<p>Family blogs maintained by parents without sufficient knowledge of online safety matters were also highlighted as a potential problem for the same reason, and the issue of digital literacy becomes more important as the age of those charged with supervising children rises. Poor knowledge of online risks among <a href="https://www.nzherald.co.nz/lifestyle/how-grandparents-low-digital-literacy-could-be-harming-your-kids/BVN3GQ5O3Q62FBJSFD4IJCIWCI/" target="_blank" rel="noopener noreferrer">elderly internet users</a> creates enough problems of its own for that demographic, but it presents additional dangers when these users have children in their care, as the usual restrictions that may be in place at home on smartphones, tablets and computers may not be enabled on devices belonging to these users.</p>

<p>One would think that as these platforms have grown, and have greater resources for tackling illegal content, the dangers would be minimized. But if we go by volume, it only appears to be getting worse. In 2014, there were just over 1m reports globally of images concerning child exploitation. Last year, the New York Times <a href="https://www.nytimes.com/interactive/2019/09/28/us/child-sex-abuse.html" target="_blank" rel="noopener noreferrer">reported</a> that there had been 18.4m reports worldwide concerning indecent images and videos of children online – double the number of cases in the previous year. The article also mentions that those familiar with the reports claimed that 12m of these cases concerned Facebook’s Messenger platform. <a href="https://transparency.facebook.com/community-standards-enforcement#child-nudity-and-sexual-exploitation" target="_blank" rel="noopener noreferrer">Facebook’s own reported data</a> shows it had acted on a total of 37.4m individual pieces of content that concerned child nudity or sexual exploitation, and the figures from the first two quarters of 2020 show a marked increase.</p>

<h4>The dark reality of keeping us safe</h4>

<p>Between the law, social platforms’ own terms of use, and common sense, the average user shouldn’t have too many problems understanding what kind of content can and cannot be shared online. When it comes to stopping the spread of problematic content, progress has no doubt been made over the years, partly in response to threats that have grown in prominence and partly in response to pressure from lawmakers (particularly over the last few years, as the dangers of disinformation and political interference have become more prevalent).</p>

<p>But in some cases, the specific content within an image would land it in something of a grey area, and a decision would be more down to a judgement call than anything else. Today, such decisions are carried out on social media platforms by a mixture of artificial intelligence and human moderators. The former may still be in its infancy, but is said to be adept at tackling nudity, to the tune of being able to <a href="https://www.theguardian.com/technology/2020/jun/17/not-just-nipples-how-facebooks-ai-struggles-to-detect-misinformation" target="_blank" rel="noopener noreferrer">correctly identify and automatically remove 99.2% of offending images</a>. Nevertheless, some level of human moderation is still required for other types of problematic content – and as this has become more of an issue, reports of the effects of this content on moderators have made for disturbing reading.</p>

<p>Last year, <a href="https://www.theguardian.com/technology/2019/sep/17/revealed-catastrophic-effects-working-facebook-moderator" target="_blank" rel="noopener noreferrer">The Guardian reported</a> that contractors tasked with moderating content on Facebook claimed to have witnessed colleagues becoming addicted to extreme graphic content and hoarding it for themselves (a claim Facebook denies), as well as being influenced by the hateful, far-right material they were supposed to be vetting.</p>

<p><script src="https://embed.smartframe.io/7d0b78d6f830c45ae5fcb6734143ff0d.js" data-image-id="shutterstock_770840635_1605024326573" data-width="100%" data-max-width="4500px" data-theme="captions-article-1"></script></p>

<p>Earlier this year, <a href="https://www.theverge.com/2020/5/12/21255870/facebook-content-moderator-settlement-scola-ptsd-mental-health" target="_blank" rel="noopener noreferrer">Facebook paid $52m to moderators</a> who had claimed their work has led them to develop mental health issues, with some claiming to have experienced symptoms of post-traumatic stress disorder (PTSD). <a href="https://www.bbc.com/news/technology-51245616" target="_blank" rel="noopener noreferrer">As the BBC reported at the time</a>, one contractor who recruits such moderators had started to ask workers to sign a form acknowledging that they understood this work could lead to PTSD.</p>

<p>This issue is not new; Facebook has <a href="https://www.bbc.co.uk/news/technology-45639447" target="_blank" rel="noopener noreferrer">previously been sued</a> for similar reasons. Nor is it specific to Facebook; Microsoft <a href="https://thenextweb.com/microsoft/2017/01/12/microsoft-sued-by-employees-who-developed-ptsd-after-reviewing-disturbing-content/" target="_blank" rel="noopener noreferrer">faced a similar lawsuit</a> back in 2017, while YouTube is <a href="https://www.scribd.com/document/476939218/Moderator-complaint-against-YouTube#from_embed" target="_blank" rel="noopener noreferrer">currently being sued</a> by a former moderator who claims that it had failed to “provide a safe workplace for the thousands of contractors that scrub YouTube’s platform of disturbing content.”</p>

<p>Confidentiality agreements may explain why we haven&#8217;t heard more about this issue. During a <a href="https://twitter.com/FBoversight/status/1320758578886578179" target="_blank" rel="noopener noreferrer">Periscope stream</a> with The Real Facebook Oversight Board, one ex-moderator who moderated content on behalf of Facebook explained: &#8220;I think the biggest problem is NDAs, which can be held over your head &#8230; which can make it difficult to speak out about anything.&#8221; This raises an intriguing question: without reports of these lawsuits, would we know anything about this at all?</p>

<p>Clearly, a balance needs to be struck between human and AI moderation so that the general public is sufficiently protected without it creating such severe issues for a handful of individuals. But what are the chances of AI being able to take over completely?</p>

<p>Facebook, along with Twitter and YouTube, appears to have <a href="https://www.washingtonpost.com/technology/2020/03/23/facebook-moderators-coronavirus/" target="_blank" rel="noopener noreferrer">relied more on AI during the ongoing pandemic</a>. <a href="https://www.bbc.co.uk/news/technology-45639447" target="_blank" rel="noopener noreferrer">It has previously stated</a>, in response to another lawsuit, that it wanted to move towards this model. This was two years ago, and the fact that human moderators are still being used suggests that either the technology isn’t quite there yet or that the threats are changing too rapidly (or, more likely, a combination of the two). Furthermore, while a deeper shift towards AI sounds like a positive solution for moderators, concerns remain. &#8220;My guess is that as AI gets better at recognizing patterns in stuff that&#8217;s constantly posted &#8230; we&#8217;ll just get more of the extreme borderline content and have to make harder decisions more often,&#8221; the ex-moderator stated.</p>

<p>Another ex-moderator on the same live-stream agreed. &#8220;At the time [AI] didn&#8217;t seem like the best indicator as to what was violating or not. It felt like job security for sure &#8230; As soon as you recognize the pattern, the internet will change the pattern. It&#8217;ll work and get better, but there will always be borderline [content]. We&#8217;re good at that as people. The algorithm and bots are only going to do so much. You&#8217;ll still need a content moderator – always.&#8221;</p>

<p>Whatever ratio of AI and human moderation is used, the reality is that, right now, people who we have never met are watching some of the most disturbing content online to ensure it gets nowhere near us. These platforms may claim to support these individuals, but these lawsuits, together with the comments from ex-moderators, indicate that this is something they are still grappling with.</p>

<h4>How social platforms are making things worse</h4>

<p>Preventing images that shouldn’t exist from reaching us is one thing, but freely providing tools for downloading other people’s lawful images only compounds the problems online users face.</p>

<p>Anyone using Facebook will usually see a Download button next to images posted by friends and other accounts, and images that do not have this (because of privacy settings) can often still be stolen by conventional means. It’s not even necessary to be someone’s friend to have access to this control on their images. While security options can be customized, it&#8217;s possible to download images from profiles that are searched for, or when a connection shares someone else&#8217;s image. At the time of writing, default privacy settings give friends of friends the same kind of access to this content that friends have, and the privacy tour that new users are invited to take to better understand the settings available to them is entirely optional.</p>

<p>Even if no malice is intended by downloading such an image, the presence of such a control shows little regard for the protection of an image owner’s content or their copyright. While UK and US copyright laws both detail a handful of scenarios in which such images from others may be copied and used, the fact that there are only a handful of legitimate exemptions that fall into this category makes the provision of this button puzzling.</p>

<p>Such issues are problematic on other platforms too, albeit to a lesser extent. Images cannot be right-clicked or downloaded with a dedicated control from (Facebook-owned) Instagram, for example, although screenshots are possible and these images are easily accessible in the page’s source code. Twitter also doesn’t have a download button of any sort, but right-clicks, drag-and-drop saving and screenshots are all allowed.</p>

<p>Photo-hosting site Flickr has similarly problematic controls. At the time of writing, it displays a license type underneath images uploaded to its platform, and the default option is All Rights Reserved. This, <a href="https://www.flickr.com/help/terms" target="_blank" rel="noopener noreferrer">as it explains</a>, means that:</p>

<p><i>“You, the copyright holder, reserve all rights provided by copyright law, such as the right to make copies, distribute your work, perform your work, license, or otherwise exploit your work; no rights are waived under this license.”</i></p>

<p>And yet, just to the side of this license is a &#8220;download this photo&#8221; button. Not only that, but clicking on this presents a range of image sizes to choose from.</p>

<p>To its credit, Flickr does support a range of other licensing options and allows users to prevent direct downloads in the manner described above. Bans on right-clicks and drag-and-drop actions prevent the image from being saved directly from the page, but protection against screenshots is absent and the image is still available in the page’s source code. Even if users do choose the more secure option, finding a page with an image in a range of sizes is straightforward enough. Anyone intent on stealing such an image can do so without much effort.</p>
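The point about source code is worth making concrete. A right-click ban is just a JavaScript event handler layered on top of the page; the image URL itself still travels in the HTML the server sends. This minimal sketch (with entirely hypothetical markup and URL) shows how trivially that URL can be read back out of the page source:

```javascript
// Hypothetical page markup: the oncontextmenu handler blocks the right-click
// menu in a browser, but the image URL is still plainly visible in the HTML.
const pageSource = `
  <div oncontextmenu="return false">
    <img src="https://example.com/photos/sunset_large.jpg" alt="Sunset">
  </div>`;

// Anyone with the source text can extract the src attribute directly.
const match = pageSource.match(/<img[^>]*\bsrc="([^"]+)"/);
console.log(match[1]); // the "protected" image URL
```

This is why cosmetic measures of this kind deter only the most casual copying: the browser must be given the image to display it, so any protection that leaves the raw file URL in the markup can be bypassed in seconds.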

<h4>Mobile challenges</h4>

<p>Images are, of course, stolen from other channels outside of social media sites, and this problem extends to the mobile devices we use.</p>

<p>Some of the most common browsers used on smartphones and tablets, from Google Chrome and Microsoft Edge through to Firefox and Brave, have a context menu that appears upon a long press on an image, one that looks much like a right-click popup on a computer. While these menus differ slightly from one another, the option to download the image directly – or at least open it in a new tab where it’s isolated from other page content – is typically provided among these options. What are the chances that someone is using this to download their own image versus the chance that it’s being used to download someone else’s image without their permission or knowledge?</p>

<p>Even if browsers prohibit these kinds of actions, the option to screenshot images on mobile devices will typically be available. As users of Snapchat, and certain banking and payment apps, may already know, screenshot detection and/or blocking has been around for some time, although it can be circumvented and is lacking from many of the apps that would particularly benefit from it.</p>

<p>Dating apps are an obvious example. While the threats associated with these have traditionally centered on the physical dangers of meeting strangers in person, image theft brings with it additional problems that don’t necessarily rely on any physical contact.</p>

<p>These include catfishing, which typically sees a new profile set up with stolen images in a bid to gain users’ trust and coax them into divulging personal or sensitive information (which obviously includes images). Earlier this year, it was reported that over <a href="https://gizmodo.com/70-000-tinder-photos-of-women-just-got-dumped-on-a-cybe-1841043456" target="_blank" rel="noopener noreferrer">70,000 images were scraped from Tinder</a> and found on a cyber-crime forum, for unknown reasons. This came less than three years after a user claimed to have exploited <a href="https://techcrunch.com/2017/04/28/someone-scraped-40000-tinder-selfies-to-make-a-facial-dataset-for-ai-experiments/" target="_blank" rel="noopener noreferrer">Tinder’s API to scrape 40,000 images</a> in order to create a facial dataset. Incidentally, these figures come nowhere near the 3bn or so images that were said to have been scraped by a start-up <a href="https://www.nytimes.com/2020/01/18/technology/clearview-privacy-facial-recognition.html" target="_blank" rel="noopener noreferrer">from Facebook, YouTube, Venmo and other sites for the same reason</a>.</p>

<p>These are extreme examples, but the ability to scrape images in such quantities in one go affects enough people for even isolated incidents to be significant. More everyday image theft will typically be carried out through conventional image saving or screenshots, and at the time of writing, the majority of popular dating apps, such as Hinge, Tinder, Bumble, Plenty of Fish and OkCupid, do not notify users when someone has taken a screenshot of their image. One notable exception, however, is Grindr, which recently introduced this as an optional setting.</p>

<h4>Taking responsibility</h4>

<p>Any kind of solution to these issues must balance security with practicality. Suggesting that people simply stop posting images online, or never use dating apps, is not the answer. But drawing their attention to the fact that they can never be completely sure that an image posted online or through an app only exists where it was originally published may be a sobering enough thought for them to reconsider what they share, and how they share it, in the first place.</p>

<p>So, unless images are published <a href="https://smartframe.io/image-security/">in a way that provides robust protection against being downloaded</a>, the questions to be asked are: What is being posted? Where is it being posted and how? How easily can it be seen by others and stolen? And does the person who is publishing this content understand the possible risks in doing so?</p>

<p>While many would be reluctant to give up using social media platforms they&#8217;ve become accustomed to, at the very minimum, it’s a good idea to review the current safety tools on offer. These change over time, and it’s easy to overlook new features that may protect accounts and their users as they are introduced, so it’s worth checking current documentation to understand how secure your social media accounts actually are.</p>

<p>Running through friend lists for any duplicate contacts, or others that in some way don’t look right, is also a good idea; you may well trust the hundreds of connections you have on a social media account, but it only takes one account being compromised for problems to start. Noticing spam being posted by a friend suggests their account has been subject to such an attack, which means it might be best to report the activity and unfriend them, and to notify them of this through a different channel.</p>

<p>It may also be worth checking older inactive social media accounts that may still host images. If you’re particularly concerned about a specific image and its availability online, you may wish to perform a reverse search for it through Google Images to see if it can be found somewhere online.</p>

<p>Changing passwords on a regular basis is also a good idea, as is using different passwords across different accounts and enabling multi-factor authentication where possible. The password-saving options within many browsers can be used to save longer and more complex passwords, although third-party tools are also available for this.</p>

<p>If you have a tendency to use the same passwords across multiple sites, or have done so in the past, it’s worth investigating whether your password details may have been leaked at some point. <a href="https://haveibeenpwned.com/">Have I Been Pwned</a> allows you to check whether your email address and other personal information may have got out in a historic data breach.</p>
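<p>For the technically inclined, the Pwned Passwords service associated with Have I Been Pwned exposes a range API built on a k-anonymity model: rather than sending your password anywhere, you send only the first five characters of its SHA-1 hash and match the remainder locally against the returned list. A minimal Python sketch of the local side of that exchange is below; the <code>pwned_range_query</code> and <code>count_in_response</code> helper names are illustrative, not part of any official client, and the network call itself is omitted.</p>

```python
import hashlib


def pwned_range_query(password: str) -> tuple[str, str]:
    """Hash the password with SHA-1 and split the hex digest into the
    5-character prefix sent to the API and the 35-character suffix
    that is matched locally (the k-anonymity scheme)."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]


def count_in_response(suffix: str, body: str) -> int:
    """Parse a Pwned Passwords range response, which lists one
    'SUFFIX:COUNT' pair per line, and return the breach count for
    our suffix, or 0 if it does not appear."""
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count.strip())
    return 0
```

<p>A real lookup would then issue a GET request to <code>https://api.pwnedpasswords.com/range/&lt;prefix&gt;</code> and feed the plain-text response body to <code>count_in_response</code>; because only the hash prefix ever leaves your machine, the service never learns which password was checked.</p>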

<h4>Final thoughts</h4>

<p>Combating the threats outlined above is a significant and complex challenge. They differ from each other with respect to their targets, the nature of their operation, and the intended consequences. The fact that many of these may be considered to be harmful without quite straying into illegality makes them all the more difficult to police. But when we consider just how damaging these can be, and how many people they stand to affect, few would argue that the protections we have right now are sufficient.</p>

<p>No single solution will prevent or stop every threat, and new measures aimed at curbing these often raise questions that aren’t easy to answer. To what degree should we expect age verification systems to work in practice? How can developers of encrypted messaging apps provide their services while complying with the authorities? How do we define hate? And who gets to define it?</p>

<p>Success will depend on a number of factors. Educating online audiences – particularly younger users – on threats and making sure that everyone understands the tools available to combat them is important. Effective enforcement of codes of conduct from regulators will also be key. The ongoing development of a <a href="https://smartframe.io/blog/content-authenticity-initiative-what-you-need-to-know/">new content attribution model</a>, which promises to deliver greater transparency over online content courtesy of the <a href="https://contentauthenticity.org/" target="_blank" rel="noopener noreferrer">Content Authenticity Initiative</a>, is also very encouraging.</p>

<p>It&#8217;s also conceivable that, as AI improves, smartphone manufacturers may be required to use these tools to detect images that are likely to be problematic as soon as they are captured. Such a move would no doubt be difficult, and would be met with plenty of opposition and privacy concerns, but when you consider the accuracy with which AI tools are currently able to detect nudity, it&#8217;s easy to see things moving in this direction.</p>

<p>But unless the ease with which images currently travel online is addressed, many threats will remain. Prevention, as the maxim goes, is better than cure, and too little has been done to stop image theft in the thirty or so years since the internet became commercially available. Now, we’re seeing the consequences.</p>
								</div>
					</div>
				</div>
				</div>
		<p>The post <a href="https://smartframe.io/blog/online-threats-appear-to-be-getting-worse-so-how-has-it-come-to-this/">Online threats appear to be getting worse. But why?</a> appeared first on <a href="https://smartframe.io">SmartFrame</a>.</p>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
