Tangentially related but getting on my nerves a lot recently:
Ad targeting for stuff that I've just bought. Worst offender: Remarkable - really good product, love mine, but I'm still seeing their ads everywhere. I'm not going to buy a second one.
It's almost confidence-inspiring. Like how evil-overlord can this be if they can't even get this basic stuff right?
> Like how evil-overlord can this be if they can't even get this basic stuff right?
Imagine living with schizophrenia in this era. It's a nightmare.
You're primed to see connections (and threats), even when there is no plausible mechanism to create the connection. In earlier years you could reason yourself out of it by saying "that is not technologically possible" or "that is not cost effective".
Nowadays you can end up with insane ideas like, "my forum posts are being de-anonymized via stylometry and someone is broadcasting threatening messages using language that mirrors my recent post." It's insane and unrealistic, but technologically possible. That makes it harder for a schizophrenic to dismiss.
When websites adapt to your behavior, it just gets worse. A/B testing isn't just annoying, it feels threatening. That's why HN is great.
Using anonymizing tools is one way to cope. It helps you avoid feeling "targeted".
> "my forum posts are being de-anonymized via stylometry and someone is broadcasting threatening messages using language that mirrors my recent post
I wouldn't be surprised if that's actually happened to someone, given how intense forum wars and stalkers can get. Especially if it's a group targeting an individual.
There is, however, the effect that people who leap straight to extraordinary claims, for which they get publicity, are almost certainly wrong, while the stalking victim to whom this stuff has actually happened cannot get anyone to listen.
> while the stalking victim to whom this stuff has actually happened cannot get anyone to listen.
Which creates a rare and dangerous situation where a high functioning schizophrenic (someone who has developed coping mechanisms, such as reality testing) will actively dismiss the warning signs of stalking. Possibly for years. As will the people in the schizophrenic's support system, people accustomed to helping them through paranoia.
I have no doubt this has occurred in the past. Tech puts concentrations of neuro-atypical people in close proximity to groups of ruthless status seekers (who I suppose are more likely to participate in the group activities you mention).
The long term consequences must be absolutely devastating.
I suspect the % of people who buy a second one for themselves, their spouse, kids, friends, etc. (or recommend someone else to buy one) is higher than the % of other people who buy one for the first time. Neither number is large but as long as the former one is slightly larger it makes sense to pay money for that advertising.
I strongly suspect you're correct, but I still think it's terrible targeting. The "sell to people who already bought one" approach is only better when compared against awful targeting techniques. And it kind of misses the point anyway. For a niche product, a market that already knows about you is pretty much always going to be more likely to buy from you than a market that has never heard of you. But part of the job of an advertiser is to expand the market that has heard of you, not just wallow in it.
Think of it this way: if I'm building a video recommendation system, I guarantee I can get much better click rates over random recommendations if instead I recommend you random videos only from your watch history. Heck, honestly I can probably boost click rates over almost any existing recommendation system by taking that system and filtering it so it only shows suggestions from channels in your watch history. People are repetitive.
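To make that concrete, here's a minimal sketch of the "recommend only from your watch history" baseline (the data and function names are made up for illustration):

```python
import random

def history_only_recs(candidates, watch_history, k=5):
    """Naive 'recommender': only suggest candidates from channels
    the user has already watched, picked at random."""
    seen_channels = {v["channel"] for v in watch_history}
    familiar = [c for c in candidates if c["channel"] in seen_channels]
    return random.sample(familiar, min(k, len(familiar)))

history = [{"title": "Intro video", "channel": "channel_a"}]
candidates = [
    {"title": "Another video", "channel": "channel_a"},
    {"title": "Unrelated video", "channel": "channel_b"},
]
# Only ever suggests videos from channels already in the history.
print(history_only_recs(candidates, history))
```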
But is that system good? Is it innovative or useful for a recommendation system to constantly ask you to re-watch videos from your history? Is that actually fulfilling any of the promises the recommendation system was sold on? No, not really. I definitely wouldn't call that a 'smart' system.
And if I had a company that had invested years of research into building an ad targeting system, and the result was just "sometimes people repeat themselves" -- that seems like a ton of wasted effort in order to get a pretty underwhelming result.
Targeted ads are sold to consumers as an unobtrusive way to learn about new products (whether that's an honest representation of the system is a separate question). So targeting of this nature is still a pretty big failure, even if it does increase clicks.
While I agree that much of targeted advertising is intrusive and inaccurate, it's still very cost-efficient compared to almost anything else.
Initially, acquisition costs are there, whether from generating SEO content, marketing, or lead generation.
Retargeting is one step beyond that. Of course, if you get ads for a car you've just bought, it's pretty bad. The ads should display aftermarket parts for that car or insurance offers, but these wouldn't come from the same vendor.
At least it leaves some branding impact.
Or if it's food items, it has led me to buy the same things a few times again.
Source: a very good friend is head of online advertising at a company.
He swears up and down that it can be a very good product if you have a large online audience.
>But part of the job of an advertiser is to expand the market that has heard of you, not just wallow in it.
Why do you assume so strongly that they're not doing both? Advertising isn't a zero-sum game or something you can only do a limited quantity of. You advertise on all channels that bring you a positive ROI. The ROI is based on the cost of advertising, the effort of advertising (i.e., $ spent on human labor), and the profit generated from advertising. Assuming that number is positive, you should advertise on that channel on top of everything else you do.
> Why do you assume so strongly that they're not doing both?
That's a good question. I guess I do assume they're doing both, but the extreme degree to which they lean on repeat targeting, to the point where it's this noticeable, and to the point where repeat targeting overwhelms other advertisers to this degree, still means that they're incredibly bad at expanding their market.
Let's look at YouTube again. YouTube can recommend both new videos and old videos. It can do both at the same time. But if the majority of my feed is repeat videos, that signifies to me that they're not very confident about the main thing I care about as a consumer: showing me new things.
Looking back at what OP says in the linked article:
> I don't mind letting your programs see my private data as long as I get something useful in exchange. But that's not what happens.
> the dirty secret of the machine learning movement: almost everything produced by ML could have been produced, more cheaply, using a very dumb heuristic you coded up by hand, because mostly the ML is trained by feeding it examples of what humans did while following a very dumb heuristic. There's no magic here.
I still don't think it's all that impressive to increase ROI using this kind of simplistic heuristic. More than that, I don't think it justifies the incredibly large amount of money and infrastructure that gets devoted to ad targeting online, if some of the most prominent parts of a gigantic system can be replicated by a couple of purchasing-history lookups in a database.
It would be impressive for them to increase advertising ROI in a way that increases value for consumers, or that justifies the tracking mechanisms or the extensive research and AI poured into the system, or in a way that is giving better ROI than ridiculously naive approaches, but repeat targeting doesn't seem to do any of that as far as I can tell.
TLDR I assume they're trying to do both, but the proportion of repeat advertising indicates to me that it's giving them a lot better results than more sophisticated methods, which (to me) indicates that their sophisticated methods are all bad. If better targeting methods existed, repeat targeting wouldn't be so prominent and obvious because it would be getting crowded out by other targeting techniques.
No one thinks that showing the same ad to the same person 100 times is effective. It does, however, make more money for the ad seller when ad customers waste their spend.
Much of ad targeting is not automatic, but manually configured by advertisers in one way or another. Many of these users manually configure their ads poorly. If I tell the system "show this ad to everyone in this geography with no restriction on frequency" the only 'smart' targeting is the stuff that restricts it to users that appear to be in that geography. Everything else is the 'dumb' method specified by the user.
I also think the purely automated methods aren't necessarily incentivized to provide better targeting for the ad buyer. Their incentive is to spend the whole budget and to get the buyer to refill it. For large sectors of this spending, there isn't much sensitivity to waste so long as the people responsible for wasting the money don't lose their jobs. You wouldn't believe it as an ordinary smart person, but people do waste billions of dollars a year on this stuff. The 'smarts' of the system is dedicated to parting fools from their money repeatedly, while giving them just enough justification, via 'greedy' attribution metrics, to keep filling up the budget.
>No one thinks that showing the same ad to the same person 100 times is effective.
That depends on what you consider "effective" advertising. Keeping a brand in your (and others') face is a key goal of advertising.
In fact, two (of many) such advertiser goals are "top-of-mind awareness"[0]:
In marketing, "top-of-mind awareness" refers to a
brand or specific product being first in
customers' minds when thinking of a particular
industry or category.
and "unaided awareness"[1]:
> Brand recall is also known as unaided recall or spontaneous recall and refers to the ability of the consumer to correctly generate a brand from memory when prompted by a product category.
That's not how it works. You are supposed to cycle creative to prevent people from going 'ad blind.' People tune out creative they have seen too many times. Any book on advertising, any of the best practices documentation provided by Google, FB, or other ad sellers, all of them will support the practice of cycling creative, even if the variations are minor. Think of Budweiser cycling the 'Wazzap' joke in the 90s/2000s through variations on the commercial.
Your point is orthogonal to mine, however. I noted the importance of brand awareness, not awareness of a specific ad. There are miles of difference between those, and everything I said is consistent with "cycling creative."
> I guess I do assume they're doing both, but the extreme degree to which they lean on repeat targeting, to the point where it's this noticeable, and to the point where repeat targeting overwhelms other advertisers to this degree, still means that they're incredibly bad at expanding their market.
> TLDR I assume they're trying to do both, but the proportion of repeat advertising indicates to me that it's giving them a lot better results than more sophisticated methods
I don't think you can conclude anything about how they are "leaning" or what "proportion" of their ad budget is being spent on different things, if the only evidence you have is the portion of ads shown to you.
The fact that their "post-sales targeting" can overwhelm other advertisers who are just doing "market expansion" targeting doesn't tell you much at all, because the number of people who would be eligible for post-sales targeting would obviously be vastly smaller than the number of people eligible for market expansion targeting. It seems very reasonable that the company you just spent hundreds of dollars at would be willing to bid more for an ad on your timeline than the thousands of other companies who want to show you an ad about something you've never heard of.
> The fact that their "post-sales targeting" can overwhelm other advertisers who are just doing "market expansion" targeting doesn't tell you much at all
It shows me that the simplistic method of "advertise to people who have already bought our product" performs vastly better than any of the other sophisticated targeting that advertisers like to tout.
To clarify -- I'm not talking about the proportion of an individual company's budget -- I'm talking about the proportion of the overall market for that product. Every time someone is shown an ad, that ad is the result of a bidding war. If a large proportion of those ads are retargeted, that means the market has decided that the vast majority of advertisers are less confident in their targeting systems than they are in a simplistic, vaguely annoying heuristic.
We can make an assumption about the market as a whole because we believe that the market is at least somewhat efficient. If advanced, targeted advertising that used tons of demographic data and personal information was competitive with these naive targeting techniques, then a higher proportion of those ads would outbid retargeted ads on any given page.
> doesn't tell you much at all
I also want to broaden out a little bit here. In the OP's article, the main complaint being made is not that advertising has zero ROI (although the author is critical of the effectiveness as well). The criticism is that the ads provide no benefit to the user, that they're annoying and reductionist, and that they fail to live up to the promises of companies like Facebook that claim targeted ads connect people to new products that they'll love.
Independent of whether or not we can be critical of the effectiveness of ads for companies, the proportion of retargeted ads also tells us that advertisers broadly, as a market, are incapable of delivering on their promises to consumers and are incapable of building a system that is mutually beneficial to both companies and consumers.
See also some of the author's criticism of recommendation algorithms, which fall into the same boat. Who cares if Netflix's recommendation algorithm increases clicks? Consumers don't sign up for Netflix out of charity; they want a system that serves their interests, not just the company's. For advertisers to make the case that access to private data is good for consumers, they need something better than this to show for it.
> It shows me that the simplistic method of "advertise to people who have already bought our product" performs vastly better than any of the other sophisticated targeting that advertisers like to tout.
Well, yes, it performs better, but only to people who have already bought the product. It's not surprising that this would perform better than whatever sophisticated ML-based targeting system they use over the general population.
> If a large proportion of those ads are retargeted that means the market has decided that the vast majority of advertisers are less confident about their targeting systems then they are about a simplistic, vaguely annoying heuristic.
Again, this isn't surprising, and I don't think it leads to the conclusion you're making. I think you're getting tripped up because you're not considering the denominator here. There's nothing odd about the fact that a single ad impression shown to a recent purchaser of your product would perform better than a single ad impression shown to someone who has never heard of your product (but perhaps is targeted using something more sophisticated based on their Internet traffic). It's also not odd that the advertiser would pay a lot more for the former impression than the latter impression (and thus recent purchasers would see a lot of the former type). What's important to note is that the ad publisher is most likely showing orders of magnitude more impressions of the latter type than of the former.
It's not surprising, but it does show that the sophisticated ML-based targeting system is less effective than promised. That doesn't need to be a shocking conclusion, but it does generally support what the author says in the linked article. We seem to agree on that point: it's not surprising that retargets outperform any sophisticated model. The only reason it's worth bringing up is that all of this data collection has been sold to consumers on the idea that it was going to be a wildly effective revolution in how products got sold, and that pervasive data collection was the only way that revolution could work.
I still want to loop back around to the idea that advertising is itself sold to consumers as a social benefit. A non-surprising outcome that shows lots of people terribly targeted ads but manages to increase ROI is still not delivering on that consumer promise, and it's not clear why consumers should be willing to give up their private data to support that kind of a system.
> What's important to note is that the ad publisher is most likely showing orders of magnitude more impressions of the latter type than of the former.
For each individual publisher, sure, I guess... but again, I'm talking about the market overall. If the majority of ads that most people are seeing are retargets, then that says something about the market. The denominator is all of the ads being seen across the entire market, not the ads that each individual company is showing.
I'm having a really hard time following your math. I actually got curious enough about this that I threw together a couple of quick simulations, and maybe I'm modeling the system wrong, but I can't get the math to work out in your favor for any of them. If the average company is only sending out 10% retargets, then across the entire market the average consumer should only see 10% retargets. There could be a lot of variance, but it can't be a common experience; it has to even out when you look at the market overall.
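A toy version of the kind of simulation I mean (all numbers made up) shows why the averages have to line up:

```python
import random

def simulate(num_consumers=10_000, ads_per_consumer=100, retarget_rate=0.10):
    """Each ad shown is independently a retarget with probability
    retarget_rate; measure the average share of retargets that
    each consumer actually sees."""
    shares = []
    for _ in range(num_consumers):
        retargets = sum(random.random() < retarget_rate
                        for _ in range(ads_per_consumer))
        shares.append(retargets / ads_per_consumer)
    return sum(shares) / len(shares)

print(simulate())  # averages out to roughly 0.10
```

Individual consumers vary, but the market-wide average share of retargets seen matches the share being sent.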
In other words, if there are actually orders of magnitude more ads going out to new users who are unfamiliar with the products being advertised, then some users should be getting all of these ads for new products. So if everyone in the room notices retargets to this degree (and in my experience this is an extremely common sentiment among people when you talk to them about ads) -- then where are the orders of magnitude of other impressions going? Who is seeing them? Those ads can't be going into the void, there should be some people jumping into these conversations saying "I rarely get retargets, most of my ads are for new things."
Unless you want to try and make the point that it's all just a Baader–Meinhof phenomenon and we don't have a lot of retargeted ads in the first place. But I think that's a very different argument than "keep the denominator in mind."
On the note of Baader-Meinhof phenomena, I do want to keep referring back to the consumer point of view in this conversation, because the article is specifically about the consumer experience and whether these ad-targeting systems have any benefit for consumers. It's worth noting that a system where only 10% of the ads consumers see are constant retargets for things they've already bought might still be too repetitive for them. It's hard to get away from the fact that complaining about retargets is extremely common, so independent of everything else, we know the system is targeting those people too much simply because they notice it and dislike it. It may increase ROI for the company, but (again) the question is why consumers should consent to give up their privacy for systems that seem to be annoying them. ROI is not a good enough reason for me to give up my privacy; this isn't a charity. And you don't need >50% retargets to hit that threshold of annoyance where the system can be judged bad; it just needs to be a big enough percentage that consumers notice it and get irritated about it.
> It's not surprising, but it does show that the sophisticated ML-based targeting system is less effective than promised.
We agree that it isn't surprising, but I don't think this says almost anything at all about how sophisticated or effective the systems are that target the general population who has probably not heard of your product. That would be like walking into a furniture store, and when a salesperson approaches you to attempt to sell you something, you conclude that their ML-targeted online ad campaigns must not be effective. There is no evidence whatsoever that their online ad campaigns aren't effective for targeting the audience they intend to target. You're being targeted by a different type of advertising not because online ads didn't work, but because you happen to be in a tiny cohort of people currently standing inside a store, and therefore can be targeted using a much less sophisticated system.
> I still want to loop back around to the idea that advertising is itself sold to consumers as a social benefit.
I certainly don't believe that advertising is necessarily a social benefit, but that isn't relevant to the point I'm making, which is about the effectiveness of ads from the ad publisher's perspective.
> For each individual publisher, sure, I guess... but again, I'm talking about the market overall. If the majority of ads that most people are seeing are retargets, then that says something about the market.
But you haven't mentioned any evidence or observations about the market overall. The fact that you receive mattress ads after buying a mattress online tells you essentially nothing about the entire market. I suspect it's unlikely that the majority of ads that most people see are retargets (although even if they are, that still doesn't contradict the point I'm making).
> If the average company is only sending out 10% retargets, then across the entire market the average consumer should only see 10% retargets.
The key here is that the distribution of ad impressions shown to each person probably has extremely high variance, and a very high portion of impressions probably cannot be identified with a valuable retargetable online purchase. If you frequently see a large portion of retargets you are probably nowhere near the average.
> Unless you want to try and make the point that it's all just a Baader–Meinhof phenomenon and we don't have a lot of retargeted ads in the first place.
Indeed, I did make that point somewhere in this thread. I also suspect a significant portion of the complaints about retargeted ads are due to frequency illusion. But even if that's not the case, my argument still applies.
> The fact that you receive mattress ads after buying a mattress online tells you essentially nothing about the entire market. I suspect it's unlikely that the majority of ads that most people see are retargets (although even if they are, that still doesn't contradict the point I'm making).
There's two ideas rolled up in here:
One is, as you mention elsewhere, the Baader-Meinhof phenomenon, where you suggest that people don't actually see as many retargeted ads as they think. I'm not going to debate that; I don't have the stats to debate it, and you could very well be right on that point. Maybe people do overestimate the number of repeats they see, and the reality is that retargeted ads are a minority of ads online.
The second idea, though, is more confusing to me: if I buy a mattress and 50% of the ads I see are for that mattress, then I know that in order for those ads to be delivered to me, a (reasonably) efficient market had to decide that, 50% of the time, the ROI on repeated mattress ads was higher than on literally any other ad I could have been shown instead (because the mattress ads were willing to outbid them). That scenario does say that the market believes the retargeting is more effective than 50% of the other ads I could have been shown.
The difference here is that when I walk into a furniture store, I'm not actively being advertised to by competitors. In order for the retargeted ads to reach me, the company needs to decide that it's willing to outbid the other advertisers currently bidding to sell me an ad. It's not a separate environment with separate rules, the retargeting is happening in the same place as all of the other advertising.
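For reference, the per-impression bidding war being described can be sketched as a tiny sealed-bid auction (a simplification with made-up names and bids; real exchanges run first- or second-price variants with many more signals):

```python
def run_auction(bids):
    """bids: {advertiser: bid}. Highest bid wins; the winner pays
    the second-highest bid (simplified second-price auction)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price

# A retargeter that values the impression highly outbids everyone else
# on this particular user's page view:
bids = {"retargeter": 2.50, "ml_targeted_brand": 0.80, "generic": 0.30}
print(run_auction(bids))  # ('retargeter', 0.8)
```

The point being argued: every ad slot runs something like this, so a retarget appearing means it outbid every competing campaign for that slot.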
Honestly, I assume that if other companies had the ability to have a salesperson walk up to me when I entered a mattress store and pitch a competing store, they'd probably jump at that opportunity. So I don't think it's a great analogy here, because in the online advertising space they can do that: they can outbid the other company and place their own ads. So we need to ask why they're not doing that more often.
The variance doesn't really change that either (again, unless you're claiming Baader-Meinhof). Because if the majority of my experience online really is retargeted and/or naively targeted ads, then what the market is saying is that in the majority of cases, retargeted/naive ads are worth bidding more on than sophisticated ML-targeted ads, regardless of whether there are occasional periods of variance that tip the other way.
What I'm saying is that I'm not convinced the sophisticated ads are performing better than the naive approaches, because from what I can see, the majority of the ads are using the naive approaches. If that is the case, then from that we need to either conclude that the bidding process for ads is not an efficient market, or we need to conclude that sophisticated targeting is rarely more effective than the naive approaches.
Or again... if the argument is that retargeted ads are not a majority, that would also be a reasonable claim to make. But it's got to be one of those 3 things.
----
Separately, I would claim that in a way the ROI and raw numbers matter a lot less than the consumer's subjective experiences, because I don't think the argument about whether privacy-invasive ads are 'good' can be separated from the consumer impact. But I do recognize that's not the point you're making, so I don't want to double down on that and argue at you about something unrelated.
> then that scenario does say that the market believes that the retargeting is more effective than 50% of the other ads I could have been shown.
Yep, and again, that’s because you recently bought a mattress. This is likely not the case for the other 99.99% of people on the internet who have not recently bought a mattress online. Again, you’re mixing up the fact that a particular ad might be more effective for you at a particular time with the conclusion that all retargeting ads must be more effective than all other ads.
> I assume that if other companies had the ability to have a salesperson walk up to me when I entered a mattress store and advertise to me on a competing store -- I think they'd probably jump at that opportunity.
You think that a company that makes flotation devices for dogs would pay a full-time salesperson to stand just outside the furniture store to make a pitch to you, given almost no data about your interest in dog life preservers? And likewise for the hundreds of thousands of other products in the world? Clearly that's not the case. It's very expensive to hire a salesperson, so you're only going to see salespeople in places where they are likely to be very effective. If there were hypothetically an auction for the salesperson at that location, it's almost certain that the furniture store would bid the highest.
Because after making the purchase, the number of ads for the item INCREASES. I don't mind seeing a handful of ads for something I've already bought, but what usually happens is that 30-50% of the ads I see suddenly become ads for the item I just bought.
> But is that system good? Is it innovative or useful for a recommendation be constantly asking you to re-watch videos from your history? Is that actually fulfilling any of the promises that the recommendation system was sold on? No, not really. I definitely wouldn't call that a 'smart' system.
It seems to work well enough for Netflix. I probably end up watching stuff in the "Watch it Again" list almost as often as I do anything else.
That's what I mean: it works in the sense that it does the strict job it was designed to do. It definitely works, strictly speaking.
But is it working in a way that gives you any value? Are you happy just clicking "Watch it Again", or did you want a system that would actually find new shows for you and let you know about new releases you'd be likely to enjoy?
If you are happy with that, then from an engineering perspective, was it necessary to build a complicated algorithm to show you a "Watch it Again" list, or could we have used a single database query and about a hundred lines of code instead?
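For what it's worth, a "Watch it Again" list really can be about one query. A sketch with an in-memory SQLite table and a made-up schema (this is an illustration, not Netflix's actual implementation):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE watch_history (
    user_id INTEGER, title TEXT, last_watched TEXT)""")
conn.executemany(
    "INSERT INTO watch_history VALUES (?, ?, ?)",
    [(1, "30 Rock", "2023-05-01"),
     (1, "Schitt's Creek", "2023-06-15"),
     (2, "Dark", "2023-04-02")])

def watch_it_again(user_id, limit=10):
    """Most recently watched titles first: the whole 'algorithm'."""
    rows = conn.execute(
        """SELECT title FROM watch_history
           WHERE user_id = ?
           ORDER BY last_watched DESC LIMIT ?""",
        (user_id, limit))
    return [title for (title,) in rows]

print(watch_it_again(1))  # ["Schitt's Creek", "30 Rock"]
```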
There are lots of simple heuristics to make a recommendation engine have better success rates. I can build a pretty good movie recommendation engine by just recommending every single Marvel movie that comes out: they're already popular so on average people will click on them. That's not hard. But that's also not the promise that we were sold when companies started proposing/building these systems into their products.
Specifically regarding Netflix, they show me several recommendation lists based on stuff I've watched before, e.g. because I watched 30 Rock, they recommend I watch Schitt's Creek. They also have "Trending Now," "New Releases," and "Popular on Netflix," but I'm not sure how personalized those recommendations are. There's also a "Watch Something" feature, where they show you things they think you might want to watch, but just start playing the episode and make you reject the selection if you don't want to watch it.
But, like I said, almost as often as I end up picking from any of those, I end up picking from stuff I've previously watched, which I'm not sure actually is anything more than a database query and ~100 LoC.
Given the state of Netflix's catalog, I think these recommendations are okay. If I want better recommendations, I can always go through and "like" some more things that I enjoy. I do wish they had better discoverability beyond these preconstructed lists. They definitely don't make it easy to search.
But, here's another question back: why do they need a great recommendation engine? All they really want is for me to not cancel my subscription. And, all they have to do to accomplish that is show me that there are things I might want to watch that I probably haven't seen yet, which I think they accomplish fairly well.
That is a great question. For me personally, that's not a great recommendation engine; I feel like it could be replaced by a better history view. But maybe that's just me. If you like that system, then I guess it's working for you.
However, in asking that question, you've now circled right around to making the article's point again:
> My complaint is that none of the above had anything to do with hoarding my personal information.
> Amazon shows a box that suggests I might want to re-buy certain kinds of consumable products that I've bought in the past. This is useful too, and requires no profiling other than remembering the transactions we've had with each other in the past, which they kinda have to do anyway. And again, everybody wins.
So, your question is good. Why do they need a super-detailed recommendation engine if what they have works? If it genuinely does work for you, then I guess they don't! And the obvious follow-up question is, why do they need to track anything beyond what movies you've watched in order to make that system work? Why does Netflix need to obsessively track when you pause, which scenes you skip, how long you mouse over a movie recommendation -- if ultimately, the best recommendation engine they have, and the one you're happiest with has nothing to do with that information?
From the article again:
> You tracked me everywhere I go, logging it forever, begging for someone to steal your database, desperately fearing that some new EU privacy regulation might destroy your business... for this?
It's easier to set up remarketing than it is to manage it properly: putting people who have already purchased on an exclusion list, or putting a frequency cap on the ads. Lots of times companies just turn it on without configuring anything, making more money for the platform and wasting spend on showing the same ad to the same user 800 times.
I don't know, maybe if the product is subpar to begin with additional "reminders" help? If I buy something and I like it, or especially really like it, I usually tell everyone I know who I think might like it or benefit from it as well - for free. And if I like it I usually don't have to be reminded of it. It's already in the "good stuff" section of my brain. I've already been sold. I really don't see ads as helpful for things you've already been sold on and use. At that point, at least for me and I assume the poster you're responding to, the opposite effect starts happening and it taints the product negatively, even if you like it.
Like, they are normally my top ad in Facebook, and have been since well before I bought it.
And I did buy it, so score 1 for them. And then I recommended it to everyone I know who likes technical PDFs (which was a lot of people, it turns out).
But now, it's just wasted inventory (for FB), and a minor annoyance (for me), along with a bunch of money (for Remarkable).
Dude, come on: after I've bought a new dinner table, I am not going to buy another one just because. Or buy it for my friend. Seriously. It's similar to how people don't generally go around recommending OSes to their friends and relatives.
BMW (or maybe a design/ad agency at their behest, I think) once said that BMW car adverts are not for people who don't yet own a BMW; they are to reassure the people who have just bought a BMW that they have made the right choice.
In reality I suspect there is a whole spectrum of audience coverage in the results, from those who don't have one and never thought about getting one before seeing the ad, to encouraging existing owners to promote the brand to others, but I thought it made the point well at the time.
3% is a good conversion rate for a selling page. So 97%+ of people who looked at the buying page were interested but didn't buy. Getting those people to come back is a major target and most companies don't care about the 3% loss in re-targeting those who actually bought.
So re-advertising to people who already bought is not evidence that they aren't "evil-overlord". Evil overlords only care about what benefits them.
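The economics behind that 3%/97% split can be sketched in a few lines. All the numbers here are illustrative assumptions, not figures from any real campaign:

```python
# Illustrative retargeting economics: why excluding buyers is often not worth the effort.
visitors = 10_000          # people who viewed the buying page
conversion_rate = 0.03     # 3% bought on the first visit

buyers = int(visitors * conversion_rate)      # already converted
non_buyers = visitors - buyers                # the audience worth retargeting

cpm = 5.0                  # assumed cost per 1,000 retargeting impressions
impressions_per_user = 10  # assumed ad frequency over the campaign

# Cost of lazily retargeting everyone vs. carefully excluding buyers.
cost_all = visitors * impressions_per_user * cpm / 1_000
cost_excluding_buyers = non_buyers * impressions_per_user * cpm / 1_000
wasted = cost_all - cost_excluding_buyers

print(f"total spend: ${cost_all:.2f}, wasted on buyers: ${wasted:.2f} "
      f"({wasted / cost_all:.0%} of budget)")
# → total spend: $500.00, wasted on buyers: $15.00 (3% of budget)
```

Under these assumptions, only 3% of the budget is wasted on people who already bought, which is why many advertisers simply don't bother setting up the exclusion.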
I bought the cheaper Kindle with the ads enabled. They were supposed to be targeted ads, but were usually for feminine hygiene products. I have no idea how any reasonable targeting algorithm would have decided to pitch that stuff to me.
I also enabled for a time context sensitive Amazon ads on my programming web site. What did I get? Endless ads for the Batman movie. Google would do slightly better, they'd show the same ad for a C++ training course over and over and over and over.
I abandoned both of them.
I also have a site on the American Revolution. Google's context sensitive ads showed ads for travel agencies.
Context sensitive ads are so pathetically bad. I can't recall ever seeing one that I even wanted to click on, let alone buy.
There is a simple fix that nobody implements: allow me, the website designer, to list the categories for the ads. I know my audience far better than Google/Amazon, so why don't they allow a little human guidance? Geez, maybe I should patent this idea.
> It's almost confidence-inspiring. Like how evil-overlord can this be if they can't even get this basic stuff right?
In the same vein, consider that ad agencies have spent millions of man-hours and many billions of dollars essentially trying to control your brain, and this (the status quo) is the best they've come up with. All of that effort, and they can't even get me to buy a 12-pack of Coke when I shop for groceries, even though I like Coke!
Meta-take on it: Maybe you as the consumer aren't their target. Maybe it's the businesses that they've successfully "tricked" into thinking that advertising works.
For that to work, they need lots of "metrics", "knobs and dials", graphs, "conversion funnels" and a myriad of other doo-dads that can be used to give the impression that they're doing something useful. Hence the focus on tracking, I'd argue.
I get the same thing, but for the engagement ring I bought my (now) wife. I feel like this signals one of two things: either the advertisers are incompetent (most likely), or they have very low confidence in my marriage (less likely, but funnier).
Checkerboard Nightmare seems to have vanished off the face of the internet, despite the author maintaining an active presence, but this gem lives on in random corners.
The idea that you should upgrade your "wedding set" periodically is, in fact, fairly common. Which is pretty sharply at-odds with diamonds being forever, but then none of this makes much sense to me, so what do I know.
Diamonds being forever only means that you shouldn't sell yours. Can't have a flourishing market for used diamonds if we want to sell many overpriced new ones, can we?
Or if you shop for a ring on two sites but purchase from one, the site you shopped at but didn't buy from never gets data about the purchase, so it doesn't know you already bought.
Could also be that if you only purchase one item from a store you might forget the store's name, but referrals for large purchases are likely a big driver of their business, so they want to make sure you remember where you got it in the hopes you can refer one person to make a purchase.
A multi-thousand dollar purchase is likely worth it for an extra $15 in targeted ads to somebody.
> Like how evil-overlord can this be if they can't even get this basic stuff right?
The problem is not that the data is being used for ads, but that it exists at all. The ads aren't what makes it scary that Google collects data. The scary part is hackers, state governments, and malicious Google personnel with authorized access.
> how evil-overlord can this be if they can't even get this basic stuff right?
The same style of speculation works the other way too. If the advertisee thinks the advertiser is incompetent then they will lower their guard and be more influenced by ads. Perhaps they are maximum evil-overlord and projecting incompetence is a minimum-energy state.
No - Remarkable has a lot of control over how Google and Facebook target you. Maybe those platforms could offer better tools, but the existing ones already let Remarkable remove you from audience lists after purchase, if it chose to do so.
I mean, to be fair, maybe they aren't using those tools to remove you individually and that's a good thing.
FWIW I get tons of Remarkable ads as well and I don't own a Remarkable, so it just means they're NOT differentiating between you and me. If they did differentiate on the basis that you have a Remarkable then that is a step in the direction of violating privacy.
I don't really know what Remarkable is. I might google them now just to find out. I don't think I am on their audience list yet, because I haven't seen many ads. But if they ever add me to an audience list based on some action I take online, I would not view it as more/less egregious for them to then remove me after I am a customer. In fact, if I were to voluntarily associate myself as a customer, I would expect a more customized experience, depending on the nature of their product, whatever it is.
I googled (incognito) and see that I do remember them now. Surprised I am not in their audience list as I have read several articles about them and gone to their website a few times, in addition to having researched similar products.
On the contrary: I'd guess they get it absolutely right. This way they address buyer's remorse, making sure you really like the product and spread the word, don't return it, and, after a few years, buy the follow-up product.
Agree. I have never seen a “targeted” ad that actually hit the bullseye. Showing me random ads would probably be more efficient (hitting 0.1% of the time instead of 0% of the time).
Pretty unrelated, but... if you have a Remarkable, check out "rmfuse". It lets you mount your Remarkable cloud as a folder and access all the documents and so on.
You’re forgetting the incentives here: of course it’s in the ad company’s interest to keep showing you ads, even if they can discern a signal that you’ve probably already made the purchase…
This is completely missing the point of why one should fight surveillance. Surveillance harms journalism and activism, making the government too powerful and unaccountable. If only activists and journalists try to have privacy, it will be much easier to target them. Everyone should have privacy, to protect them. It’s sort of like how freedom of speech is necessary not just for journalists but for everyone, even if you have nothing to say.
Both discuss theories of how advertising actually works, and why targeted advertising is counter-productive (for medium-to-large brands, anyway).
The basic premise is that advertising works by seeding common-knowledge within a society. It's important not just that I know that Nike makes cool sneakers, but also that I know that my peers know that. Otherwise, how can I be sure that my idea of cool sneakers is the same as everyone else's?
Culture exists in the interactions between people; for brands, hyper-targeted ads risk focusing on individuals and not cultural influence.
Targeted ads have been useful to me about 1 time in the last 5 years. Being generous, maybe as many as 5 times. The other 99999 ads I've had to endure have either been for something I just do not care about, or more frustratingly for something I already own or currently pay for.
If targeted ads worked at least 25% of the time, I might even appreciate them. But they are virtually worthless. People paying to run ads are suckers, because there's no way their ads get much real, useful business. Social marketing and other approaches just must be more effective.
Targeted ads aren't about targeting people who are very likely to buy the product; they just target people who are slightly more likely than average to buy it.
Take the classic example: "I just bought a vacuum cleaner, why do I get lots of vacuum cleaner ads?" Well, the reason is that people who just bought a vacuum cleaner are sometimes unhappy with the purchase and buy another one. The probability isn't high, but it's still higher than the probability that a random person would want to buy one. It might still feel like noise to the person seeing the ads, but to the advertiser, getting a 20% higher conversion rate is huge (or whatever number they are getting).
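A toy calculation shows why even a small lift matters at the advertiser's scale. Every number here is hypothetical:

```python
# Hypothetical numbers: expected profit per 1,000 impressions, with and without a lift.
base_conversion = 0.001      # random person: 0.1% chance of buying after seeing the ad
lift = 1.20                  # assumed: recent buyers convert 20% more often
margin_per_sale = 50.0       # assumed profit per vacuum cleaner sold

def profit_per_mille(conversion_rate):
    """Expected profit from showing the ad to 1,000 users."""
    return 1_000 * conversion_rate * margin_per_sale

random_audience = profit_per_mille(base_conversion)
recent_buyers = profit_per_mille(base_conversion * lift)

print(f"random audience: ${random_audience:.2f} per 1,000 impressions")
print(f"recent buyers:   ${recent_buyers:.2f} per 1,000 impressions")
# The 20% lift becomes 20% more expected profit per impression,
# even though ~99.9% of viewers in both groups never buy anything.
```

From the viewer's side, both audiences look like the ad almost always missed; from the buyer's side of the auction, the second audience is simply worth more per impression.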
Have you considered that the targeted ads might have worked on you without you realizing it? You might have clicked on an ad and it was such a non event for you that it never made it into your long term memory.
I very much doubt that I have fallen for a ruse like this.
However, with all the cookie and newsletter popups, it has happened that an object moved on screen as I was clicking, and I ended up clicking some stupid ad. I don't count that since it wasn't my intention nor my mistake.
I can't help but feel that the whole advertisement and "ad targeting" business is built upon quite a few lies and a dose of misinformation to make the whole thing seem way more effective than it is.
Targeted ads have been very useful to me, but I do game the system a bit. When I start seeing targeted ads for irrelevant things, I just do some searches for things I like to look at, and shortly thereafter, all of the ads I get are for things I like to look at. It's a few minutes of effort once every couple of weeks, but I feel the results are a lot less intrusive.
This has been our experience running an ad network that only does content-based targeting (https://www.ethicalads.io/). As noted in the article, simple heuristics and topics are plenty to target ads, and provide great value for both advertisers and publishers. We're also cutting out all the ad tech middlemen who take a cut, keeping the value in the developer ecosystem, and keeping the web more private & secure.
We basically let someone like Sentry or Twilio target JS developers with a specific ad about their JS integrations. They can link to JS-specific landing pages with sample code, etc. It works super well, and doesn't require anything beyond knowing the content of a page.
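A content-based matcher like that can be remarkably simple. Here is a minimal sketch of the idea; the campaigns, keywords, and function names are invented for illustration and are not EthicalAds' actual implementation:

```python
# Minimal contextual ad matcher: pick a campaign by topic keywords found on the page.
# Campaigns and keywords are made up for illustration.
CAMPAIGNS = {
    "js-monitoring": {"keywords": {"javascript", "react", "node"},
                      "ad": "Monitor your JS app"},
    "py-hosting": {"keywords": {"python", "django", "flask"},
                   "ad": "Host your Python app"},
}

def pick_ad(page_text):
    """Return the ad whose keywords best overlap the page's words, or None."""
    words = set(page_text.lower().split())
    best, best_score = None, 0
    for campaign in CAMPAIGNS.values():
        score = len(campaign["keywords"] & words)
        if score > best_score:
            best, best_score = campaign["ad"], score
    return best

print(pick_ad("Getting started with React and Node tooling"))
# → Monitor your JS app
```

Nothing here needs a user profile: the only input is the page being viewed, which is the article's point about how far simple heuristics go.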
Given that "simple heuristics and topics are plenty to target ads", is it correct to assume that your payout/cost is similar for publishers/advertisers compared to networks that target users?
"Embarrassingly, the trackers themselves don't even need to cause a slowdown, but they always do, because their developers are invariably idiots who each need to load thousands of lines of javascript to do what could be done in two"
Data that is incorrectly analyzed much of the time is still a privacy threat, and sometimes a bigger privacy threat. It's bad enough if your purchases are correlated with being a terrorist, but having your purchases sometimes incorrectly correlated with being a terrorist is worse.
The thesis of the article might be true, but this is a reason to be more careful with private information, not less. If companies are sloppy in analyzing gathered data and do not value it enough, they might not be very good at safeguarding it from bad actors who will use it to their advantage more “efficiently”.
> it will recommend you interview people with male, white-sounding names, because it turns out that's what your HR department already does
His linked article (https://www.reuters.com/article/us-amazon-com-jobs-automatio...) says nothing about white or white-sounding names. And I sincerely doubt that anybody's HR department is screening out non-white sounding names for technical roles. Certainly nowhere I've ever worked - I can't remember the last time I saw a resume that wasn't Indian, male or female.
>We perform a field experiment to measure racial discrimination in the labor market. We respond with fictitious resumes to help-wanted ads in Boston and Chicago newspapers. To manipulate perception of race, each resume is assigned either a very African American sounding name or a very White sounding name. The results show significant discrimination against African-American names: White names receive 50 percent more callbacks for interviews. We also find that race affects the benefits of a better resume. For White names, a higher quality resume elicits 30 percent more callbacks whereas for African Americans, it elicits a far smaller increase.[1]
>researchers created resumes for black and Asian applicants and sent them out for 1,600 entry-level jobs posted on job search websites in 16 metropolitan sections of the United States. Some of the resumes included information that clearly pointed out the applicants’ minority status, while others were whitened, or scrubbed of racial clues…Twenty-five percent of black candidates received callbacks from their whitened resumes, while only 10 percent got calls when they left ethnic details intact. Among Asians, 21 percent got calls if they used whitened resumes, whereas only 11.5 percent heard back if they sent resumes with racial references[such as name changes, removing membership in organizations, scholarships, or achievements that might reveal race]. [2]
>Researchers from the University of California, Berkeley and the University of Chicago sent 83,000 fictitious applications for entry-level job postings to 108 Fortune 500 employers, using randomly assigned and racially distinctive names. They found that distinctively Black names on applications reduced the likelihood of hearing back from an employer by 2.1 percentage points relative to distinctively White names. But differences in contact rates varied substantially across firms. About 20% of the companies were responsible for roughly half of the discriminatory behavior in the experiment. [3]
> See, the problem is there's almost no way to know if you're right.
It turns out that this doesn't matter a lot of the time. You only notice because the advertising to you is bad - but by the time you can notice, the money has already changed hands.
I've worked with people who work on what is variously called "data zombies" or "data shadows" or "data doubles" and the problem is that: they often get it wrong and they do not care and pretend it's correct. This is doubly-concerning in the security realm (i.e. the NSA is only supposed to read data streams of non-Americans so they have a lot of incentive to produce classifiers that guess that things are not from the US).
My Facebook ads are well targeted: I've bought tickets to a book signing for my daughter's favorite author, an awesome map, a couch (!!), and a great pair of underwear off fb ads. I'm sure there's other stuff I'm forgetting.
You could imagine an alternate world where FB has total information about my online activity, and the ads are extremely useful because they know what I want before I do.
I think it would help dispel the suspicion that the Facebook ad targeting algorithm itself wrote this, if you added a little disclaimer that you work(ed) there.
Fair enough: I did work there, though never on ads, and it's been a few years.
To be clear, I realize the privacy tradeoffs are probably unacceptable and I'm not seriously proposing the FB panopticon. But I do think it's an interesting thought experiment to consider how good this could be without constraints.
I briefly hit a period right around when I bought a home that the recommended ads on instagram were really really good. I bought a bed and a sofa from instagram recommendations, and looked at a whole bunch of other things that I didn't end up buying.
Nowadays it's back to being garbage, but I have to imagine every now and then these algorithms hit a gold mine like they did with me.
> Let's be clear: the best targeted ads I will ever see are the ones I get from a search engine when it serves an ad for exactly the thing I was searching for. Everybody wins: I find what I wanted, the vendor helps me buy their thing, and the search engine gets paid for connecting us. I don't know anybody who complains about this sort of ad. It's a good ad.
Not the point of the article, but it's typically the vendors who complain. Other vendors can and do buy up ad space on their keywords, pushing the thing you were searching for further down the page. Search for Miele vacuums, get an ad for Dyson. Now Miele has to buy up ad space for the keyword "miele", which it worked hard to get name recognition for. Some people think this is a bit like extortion, but really it's just the cost of people being able to find your product in 0.00045s. The search engine is just charging the vendor for the service.
This article is wrong to assume targeted advertising requires complex machine learning algorithms. Here are some personal examples where my ads have been poorly targeted:
- I'm actively looking to buy a house for more than a million dollars, and I'm signed into my real estate app using my Google email address. Meanwhile YouTube is showing me ads for Candy Crush Saga.
- A week ago I almost bought an iMac on Apple's website but bailed at the credit card payment screen. I had logged into my Apple account with my Google email address, yet since then I've not seen one ad to suggest I should buy that iMac (although I do see Apple ads for iPhone on Twitter)
- Twitter kept showing me Disney Plus ads even after I signed up
- In 2019 on holiday in Bali I saw YouTube ads in Indonesian, despite being logged in and being unable to speak that language
Your purchase history is probably the largest, most useful, and most difficult-to-hide signal.
It can be directly accessed through credit and debit payments, as well as contactless payment mechanisms. It's increasingly possible through facial and other biometrics data. It is specifically associated with spending patterns.
Travel data are available through licence plate scanners, tollway tokens, and transit passes, as well as, again, biometrics.
Anything online leaves a very fat and wide trail, of course.
Your employer's payroll processor would be another data channel. That's going to be available to other prospective employers, increasingly.
Utilities data might also be available.
Auto repair and smog-station data are collected and reported, and often sold to insurance companies for rating purposes. (Distance driven correlates strongly to risk.)
I think that you covered most bases already. I'd add these: pay with cash, use throwaway emails for everything, scramble anything that can be used to fingerprint you, give false information unless you have a reason not to, move to a cabin in the woods, speak in code, wear a wig.
The only people good at targeting are the ad platforms, and they don't use an algorithm. What they do is slice up an audience into interesting enough sounding chunks in order to attract ad buyers, who are their actual customers. There's no reason to think, however, that the ad buyers would be particularly good at figuring out the audience for their product, or that a particular product necessarily even has an audience. So you would expect the vast majority of ads to be bad.
This is incredibly wrong. Google and Facebook are primarily algo driven ad platforms now, with all the best targeting being algorithmically derived. Large ad accounts on FB pretty much never use the "interesting sounding chunks" you're referring to anymore.
> The only people good at targeting are the ad platforms
Honestly, I don't see it. The ad tech industry keeps telling everyone that there are plenty of people out there who see advertisements that matter to them, and they're happier because of it. But my experience has been the opposite.
Ever since "ad tech" became a thing, and online advertising started going after people and not context, it just seems to have gotten worse.
Just in my open tabs right now:
An ad for a food product I cannot consume for medical reasons.
An ad for a local pizza joint in a city a thousand miles away.
An ad for something to use on body parts that I don't possess because of my gender.
Advertising was a solved problem. But as is often the case, a small group of people within the SV bubble decided to reinvent the wheel, and did so badly.
Ad targeting broadly fits into these categories (with limited and over-simplified descriptions):
Geographic - Where are you viewing the ad from?
Contextual - What is the content surrounding the ad?
Demographic - What do we know about you based on context? (sites have demographics)
Behavioral - What did you buy in the past?
Appliance - What are you using to view the ad, be it a toaster or Chrome on Windows?
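These categories can be strung together as a simple rule chain. A toy illustration follows; the field names, campaign rules, and example values are all invented:

```python
# Toy ad request evaluator combining the targeting categories above.
# All field names and campaign rules are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class AdRequest:
    country: str               # geographic: derived from IP (often cached, sometimes stale)
    page_topic: str            # contextual: what the surrounding content is about
    site_audience: str         # demographic: who the site says its readers are
    device: str                # "appliance": browser/OS/device string
    past_purchases: list = field(default_factory=list)  # behavioral: prior transactions

def eligible(req, campaign):
    """A campaign runs only if every targeting rule it actually sets is satisfied;
    unset rules default to matching the request."""
    return (req.country in campaign.get("countries", [req.country])
            and campaign.get("topic", req.page_topic) == req.page_topic
            and campaign.get("device", req.device) == req.device)

req = AdRequest("US", "cooking", "home cooks", "mobile")
pizza_campaign = {"countries": ["US"], "topic": "cooking"}
print(eligible(req, pizza_campaign))  # a stale country in the location cache flips this
```

Each rule is cheap and fallible on its own, which is why a stale IP-to-location cache entry alone is enough to serve a pizza ad a thousand miles off target.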
There are all sorts of possible reasons for your observed targeting failures.
> An ad for a food product I cannot consume for medical reasons.
How would an ad company know that?
> An ad for a local pizza joint in a city a thousand miles away.
This is a geographic targeting failure, unless you are on a VPN where the IP is misrepresented, or you have purchased from that location before.
> An ad for something to use on body parts that I don't possess because of my gender.
This could be contextual or simply dumping impressions to meet a campaign run expiry.
> a small group of people within the SV bubble decided to reinvent the wheel, and did so badly.
Nope. So you are disappointed the heuristics don't have more concrete data about you? It's the same as it always was. Smaller and new(er) companies have different priorities for what to focus on, but it's all the same tech.
If the ad tech industry were as great as it tells advertisers, none of these things would happen.
And, no. I'm not on a VPN. Even the most rudimentary what-is-my-ip sites correctly display my location. But I bet some ad tech company told that pizza joint its ads would be targeted. And probably charged extra for it.
If that's what you want to see, you'll see it. This has no effect on how it's done.
> If the ad tech industry was as great as it tells the advertisers, none of these thing would happen.
How is that related to technical capabilities?
> I'm not on a VPN. Even the most rudimentary what-is-my-ip sites correctly display my location.
A real-time lookup is too slow; you generally want location pre-computed in a cache. If someone so much as spoofed your IP, it will poison your cached location. God forbid you leave a logged-in browser session anywhere else. Computers are dumb, but you expect more and blame the adtech companies - I get it.
This is no more than Sturgeon's law (90% of everything is crap).
Beyond that, no, you can't do GPT-3 or YOLO or even MNIST[1][2] with a dumb heuristic.
> almost everything produced by ML could have been produced, more cheaply, using a very dumb heuristic you coded up by hand, because mostly the ML is trained by feeding it examples of what humans did while following a very dumb heuristic
Once I was searching for a surprise vacation for my family. After buying it, ads began popping up not only on my notebook but on every device at home, no matter who was using which device.
Says the influential politician being blackmailed for something that happened decades ago. Privacy is the only way to have a civilised society with a functional democracy.
I think your camera photos are the hottest cake. To get cool ads, move cool images (downloaded or created) into the camera folder on your phone. The AI will "think" you are visiting those cool places and snapping lambos, jets, and rubber-banded cash like some really rich person. Suddenly, I bet, you will start seeing the rich traveler's ads, not a mirror of your damn activity.