
TARGETED GROWTH

"Instagram's 2019 algorithm update has a lot of exciting stuff. 

Either get destroyed by it... or use it to your advantage".

0 to 15 million+ Facebook likes

0 to 6 million+ Instagram followers

4 billion+ social views


Our Work:


0 to 6 million+ Instagram followers


0 to 15 million+ Facebook likes


4 billion+ social views

As seen on...

Co-Founder / @jamesshamsi

What 2019's Instagram Algorithm Update Means


"First, sexualized pictures are going to to get less reach on Explore. On the surface this is bad news for models, however we have a few different ways you can fight back. 

Second, violence related memes are also going to be given less reach on Explore too. 


Instagram says, “We have begun reducing the spread of posts that are inappropriate but do not go against Instagram’s Community Guidelines.” That means sexually suggestive content could still get demoted, even if it is within the guidelines.

Similarly, even memes that don't contain hate speech or harassment, but that Instagram still considers in "bad taste" (lewd, violent or hurtful), could get fewer Explore page views. Specifically, Instagram says, “this type of content may not appear for the broader community in Explore or hashtag pages.” For influencers who rely on going viral to grow, this is bad news."

As seen on...

Yahoo! News
Chicago Tribune

Co-Founder / @heyimadam


Example Non-Recommendable Content

Instagram’s product lead for Discovery, Will Ruben, said, “We’ve started to use machine learning to determine if the actual media posted is eligible to be recommended to our community.” Instagram is now training its content moderators to label borderline content when they’re hunting down policy violations, and Instagram then uses those labels to train an algorithm to identify that content automatically.
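For a concrete picture of that label-then-train loop, here's a minimal Python sketch (scikit-learn). Everything in it is a made-up stand-in: the embeddings, the labels and the 0.5 threshold are assumptions for illustration; only the idea of training a classifier on moderator labels comes from Instagram's description.

```python
from sklearn.linear_model import LogisticRegression
import numpy as np

# Hypothetical stand-ins: one 128-dim embedding per reviewed post,
# 1 = a moderator labeled it "borderline", 0 = fine to recommend.
rng = np.random.default_rng(0)
embeddings = rng.random((1000, 128))
moderator_labels = rng.integers(0, 2, 1000)

# Train a classifier on the moderators' labels.
classifier = LogisticRegression(max_iter=1000)
classifier.fit(embeddings, moderator_labels)

# At recommendation time, score a new post and keep it off Explore
# if the predicted "borderline" probability crosses a (made-up) threshold.
new_post = rng.random((1, 128))
borderline_score = classifier.predict_proba(new_post)[0, 1]
eligible_for_explore = borderline_score < 0.5
print(eligible_for_explore)
```

The point of the sketch is just the flow: human labels go in, a model comes out, and that model quietly decides what is eligible for Explore.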

Sexually Suggestive

Graphic/Shocking


Violence


Co-Founder / @vikpathak

As seen on...

Bitcoin Magazine

"So, how is Instagram's algorithm detecting these "inappropriate" things? ​ It's not just one way, but one example is through simple image and video detection. You can actually see this yourself in real-time really easily. First, head to Facebook and right-click on any picture, then select "Inspect". From there you'll see a bunch of stuff appear, look for the line: "alt =" Image may contain:". This will give you a general but powerful and quick understanding of image recognition technology like this works. See an example with model @annelisejr below, with Facebook detecting 1 person standing, indoors taking a selfie.

Example: For a fitness store, we could target and test people posting content in (see the radius sketch below this list):

  • All hiking, cycling and fitness trails within a 75-mile radius,

  • All Equinox, 24 Hour Fitness and other gyms within a 75-mile radius,

  • All healthy eating spots within a 75-mile radius.
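The common thread in that list is a simple radius check: is a given trail, gym or restaurant within 75 miles of the store? Here's a minimal Python sketch of that check using the haversine formula; the coordinates are made-up examples, and in practice the targeting itself happens inside the ads platform rather than by hand.

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_MILES = 3958.8

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in miles."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_MILES * asin(sqrt(a))

store = (34.0522, -118.2437)  # hypothetical store, downtown Los Angeles

candidate_spots = {
    "gym in Long Beach": (33.7701, -118.1937),              # roughly 20 miles away
    "trailhead near Santa Barbara": (34.4208, -119.6982),   # roughly 87 miles away
}

# Keep only the locations that fall inside the 75-mile targeting radius.
for name, (lat, lon) in candidate_spots.items():
    distance = haversine_miles(*store, lat, lon)
    print(f"{name}: {distance:.0f} mi, target = {distance <= 75}")
```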

Instagram says these "inappropriate" posts won't be fully removed from the feed, and the company told TechCrunch that the new policy won't impact Instagram's feed or Stories bar.

However, Facebook CEO Mark Zuckerberg's November manifesto showed the company's long-term stance on reducing the reach of so-called "borderline content," which for Facebook would mean being shown lower in News Feed.

That policy could easily be expanded to Instagram in the future. That would likely reduce creators' ability to reach their existing fans, which can impact their ability to monetize through sponsored posts or to direct traffic to outside revenue sources like Patreon.

- EMPOWERING MODELS VIA INSTAGRAM -

(Image: Instagram's recommendable-content examples)

Facebook's Henry Silverman said: "As content gets closer and closer to the line of our Community Standards at which point we'd remove it, it actually gets more and more engagement. It's not something unique to Facebook but inherent in human nature." The borderline content policy aims to counteract this incentive to toe the policy line.

"Just because something is allowed on one of our apps doesn't mean it should show up at the top of News Feed or that it should be recommended or that it should be able to be advertised," said Facebook's head of News Feed Integrity, Tessa Lyons.

This all makes sense when it comes to clickbait, false news and harassment, which no one wants on Facebook or Instagram. But when it comes to sexualized but not explicit content that has long been uninhibited and in fact popular on Instagram, or memes and jokes that might offend some people despite not being abusive, this is a significant step-up in censorship by Facebook and Instagram.

Creators currently have no guidelines about what constitutes borderline content; there's nothing in Instagram's rules or terms of service that even mentions non-recommendable content or what qualifies. The only information Instagram has provided was what it shared at today's event: the company specified that violent, graphic/shocking, sexually suggestive, misinformation and spam content can be deemed "non-recommendable" and therefore won't appear on Explore or hashtag pages.

Instagram has also announced it is flagging content as "borderline" and "inappropriate" if it is found to be spammy in nature or to be spreading fake news. See the examples below:

(Image: Instagram's misinformation example of non-recommendable content)
(Image: Instagram's spam example of non-recommendable content)

So, how can you benefit from the new algorithm?

Apply for a free Instagram growth consultation below.

- EMPOWERING MODELS VIA INSTAGRAM -

RECENT PRESS:
