Having worked with journalists, these sound like typical entitlement complaints. Frankly, a lot of writers have an attitude that they're "artists" who shouldn't be rushed and don't have performance requirements.
Frankly, it does not sound very hard at all. They have to write 20 posts a day, but each post is only a headline and a brief summary. A focused writer can finish that in 15 minutes.
> We had to write in the most passive tense possible. That’s why you’d see headlines that appear in an alien-esque, passive language.
Oh no! How dare Facebook strive to be neutral and passive.
> After doing a tour in Facebook’s news trenches, almost all of them came to believe that they were there not to work, but to serve as training modules for Facebook’s algorithm
Of course they were. Facebook is quite explicitly trying to apply ML throughout their site. Why should they permanently be in the business of employing writers to do something which computers could do reliably and effectively?
> We had to write in the most passive tense possible. That’s why you’d see headlines that appear in an alien-esque, passive language.
I was just thinking that the Facebook summary headlines are surprisingly journalistic compared to Buzzfeed, Upworthy, etc. Seeing somebody complain about that explains why their title is "news curator" and not "journalist."
After having become so annoyed by the buzzfeedization of even traditional news, I was positively surprised when I discovered the "Current Events" page of wikipedia [1], which contains per-day lists of actually relevant stuff (ok, I would take out the crime and sports sections, but that's just me). I would love more neutral, concise media like that.
Oh, I would say it's a very "journalistic" complaint, unfortunately. Apparently, there is a fair market for that (given the popularity of sites like newyorker.com on HN), but in general what some journalist wants to write is almost opposite to what I want to read. I want to know news, meaning as short and pointed messages as possible, and "no news today" would be actually a good thing. A journalist who thinks he is "good" (note the "graduated from Ivy League" part) wants presence, wants to be someone, a (or, preferably, "the") trendmaker, the Lord of opinions, like that Spider Jerusalem guy. His ego gets in his way. Which is pretty natural, actually — not only for journalists, but for everyone who wants to be a Person, an Individual, not just a smaller part of the system. Which is actually good, I think, for a human.
It might somewhat work for printed media and specialized news agencies — after all, the agency wants something close to what the journalist wants: to have presence, for the public to pay attention to them rather than to some other agency. "No news today" would be a disaster for them.
I mean, it's all pretty natural, if we admit that typical news agency's goal is not to "give information". And Facebook's "trending news" section's goal, on the contrary, is quite close to that.
I think that's an unfair dig at Spider Jerusalem. He's a decent (fictional) journalist in my opinion. Trying to uncover truth and stuff, old school.
More on topic though, the comments section of the article is also full of "booohoo they have it sooo tough" comments but I think work conditions are relative. It's the typical job agency treatment only a couple of levels higher. If everyone around you is living the Facebook life and you're curating news in a basement/conference room...sure that's not horrible but it's still fairly indecent treatment.
Disclaimer: Huge Transmetropolitan fan, programmer not journalist :)
The article also attempts to paint them as overworked and abused and treated differently because they were contractors. Having worked in the big tech industry as a contractor, their description of contractor treatment and workload is pretty normal (no employee perks, quotas, etc.). This is due more to Vizcaino vs. Microsoft [1] than any sort of abusive contractor agenda.
I shall just leave a link to Douglas Rushkoff here https://www.youtube.com/watch?v=87TSoqnZass to explain the deep flaws in the "default" techie thinking of which your comment is a classic example.
I stopped at "spiritual deeply humanistic recharge." There's no way I'm watching 90 minutes of BS.
Also, in case it wasn't clear, I have in fact worked at several media companies directly with journalists. I'm not speculating from some distant position about the attitudes of journalists.
If you have an actual point, I'd love to hear it. I don't see how writing basic summaries of the news is a valuable or artistic endeavor.
"After doing a tour in Facebook’s news trenches, almost all of them came to believe that they were there not to work, but to serve as training modules for Facebook’s algorithm." ha! aren't we all.
So.... They finally know what it feels like to be a blue-collar worker whose job gets outsourced and who is sometimes offered benefits contingent on training their offshore replacements.
Everyone was happy to have $30 jeans which could last one year over their $70 jeans made stateside which could last five years or more... But now that it affects them, oh my, this is bad now...
> Everyone was happy to have $30 jeans which could last one year over their $70 jeans made stateside which could last five years or more...
My only piece of designer clothing was a $160 pair of jeans that lasted only 3 or so years. I concluded that from now on I should stick with buying $50 jeans, which last me at least 2 years.
I see this as a fundamental problem of the unsustainability of our economy. The incentive of every corporation is to earn an increasing amount of money, and thus to move more things from resources to garbage as quickly as possible. "But all those jobs!!" is a stupid argument. Jobs aren't the goal. The goal should be getting resources to the right people, and sustainability for the long term.
I don't buy this planned obsolescence bs. I get that most markets aren't very free, but they're free enough to ensure that superior products generally win out. Companies are trying to make money today. They aren't thinking: "hey, let's put out a crappy product that will be worse than our competitors' so that people will have to buy more of our stuff in the future!"
Any company that did that wouldn't make it to tomorrow.
>they're free enough to ensure that superior products generally win out.
Doesn't this imply that the customer has enough information to determine which is the superior product? In reality, it's quite easy to dress up an inferior product with lots of marketing, and gain great success in the marketplace. Heck, look at Beats headphones. They're not superior in any way. Yet, by carefully crafting a marketing message, and putting massive amounts of money behind it, they've managed to get a fairly large share of the market.
The problem is that the consumer usually has no way to compare quality easily, and so all they have to go on is sticker price. As a result the incentive is for companies to reduce sticker price at the expense of quality.
If you ask a consumer whether they want a $500 product that lasts 10 years or a $200 product that lasts 2 years, they'll happily pay extra for the higher quality product. But that's not usually the choice they're presented with.
I think a solution to planned obsolescence would be to mandate prominent labelling of lifetime guarantees.
>The problem is that the consumer usually has no way to compare quality easily
That used to be true and is an example of information asymmetry in markets - classic example being the used car market [0]. It is much less so now, when was the last time you bought anything substantial without reading a review of it or checking the customer ratings?
>If you ask a consumer whether they want a $500 product that lasts 10 years or a $200 product that lasts 2 years, they'll happily pay extra for the higher quality product.
That's not necessarily true - value decisions are more complex than that. Depending on the market, those two price points may be in different segments based on (e.g.) disposable income. Sure, for Alice, who has $500 disposable income available for a particular purchase, this might be a no brainer, but for Bob with only $200, not so much. See also Vimes' boots theory of socioeconomic unfairness [1], hyperbolic discounting [2]
aninhumer pointed out the main flaw in this line of reasoning but it's not just the lack of long term data. Many companies change models frequently or make quiet design changes – e.g. the blender given a good reliability review changes a couple of years in to replace metal gears with plastic, which requires a teardown to see. If you go to Costco, Sam's Club, etc. count how many things have slightly different specs and model numbers than the ones you can find reviews for, etc. The last time we bought a washing machine, only one of the models which Consumer Reports had recommended a little over a year before was still offered for sale in our region – everything else had changed and most online reviews were just “we got it last week and it's great” comments which don't tell you anything about long-term experience.
In some cases, you can simply conclude that an entire brand is either unreliable or trustworthy (e.g. Apple doesn't sell Walmart edition devices which break within 18 months) but in most cases you have to do a lot more research to know what level you're getting.
> It is much less so now, when was the last time you bought anything substantial without reading a review of it or checking the customer ratings?
Besides the already mentioned point of reviews being written too soon to reflect product lifetime, keep in mind that easily available (e.g. on the shop site) online reviews are mostly bullshit anyway. It's mostly a mix of people affiliated with producers or paid to endorse a product, with the occasional person who is too dumb to operate something correctly but quick to drop a 1-star rating with a nonsense explanation. To get any sensible reviews, you actually have to know some trustworthy reviewers - be it a site you follow or a friend you know. Either way, maintaining sources of trusted reviews requires some effort, so you won't be doing that for most of the products you buy anyway.
>when was the last time you bought anything substantial without reading a review of it or checking the customer ratings?
But reviews don't tend to judge product lifetime that well, because they're usually written within the first month a product comes out.
I think the only way you can get reliable information on lifetime is by forcing companies to provide it, and guarantee it. (Or put a big sticker on their product saying "0 years guaranteed")
>Depending on the market, those two price points may be in different segments based on (e.g.) disposable income.
I feel like I see a lot of reasonable sounding financing in the markets I'm thinking of, but perhaps I'm remembering times before 2008.
> They aren't thinking: "hey, let's put out a crappy product that will be worse than our competitors' so that people will have to buy more of our stuff in the future!"
Of course they're doing that. They get away with it, because a) everybody else is doing that too, and b) they know that throwing money at marketing has much better ROI than making a good product. A little bullshit can paper over many faults.
"Superior" is a subjective term. Superior in what sense?
Trump for example got tons of coverage. Is he "superior" to someone who may be more knowledgeable, skilled and would "make a better president" but got less coverage? The amount of advertisement, exposure, and just plain psychological manipulation matters.
Is sugary soda superior to healthy drinks? Is bottled water superior to tap water? Do we as a society really need to throw away metric tons of non-biodegradable materials every minute?
Thinking that something wins out because it is "superior" is just confirming the free market ideology - whatever wins must be "superior" is a tautology which makes the word "superior" lose all meaning.
Yes, lots of crappy products beat products that performed better in practice, because of external and network effects.
But what we should be asking is about the correlation between a company's incentives and polluting the planet. In the main, they are pretty aligned, which is scary.
With regard to clothing, "fashion" has, unfortunately, trickled down to everyone. One can buy clothing which lasts, or boots which last. You can buy a $400 pair of boots which will take a beating for up to a decade of use, or you can buy a pair for $100 which will last you a season. Lots of people will balk at the $400 boots. Same with clothing. They'll buy the stuff made with short cotton fiber which begins fraying after a couple of washes, or you can buy the long-fiber textiles which cost a lot more. Or wool... or white goods or pots and pans... etc.
Anyway, most people have adopted the fashion cycle --something that only wealthy people would import from the fashion capital du jour.
The problem is that neither I nor most other people know the first thing about the quality of materials or the manufacturing of clothes and shoes, so we can't tell whether a product will last long.
Companies don't merely produce products, they also produce culture. And in fact, that's mostly what they sell. CocaCola isn't going to put their product in white cans that say "sugar drink" on them are they? Nobody would buy it.
Of course. But buyers mainly see "cheap jeans!". They don't always think about quality, craftsmanship, etc. They've been primed for yearly-changing fashion, which enables cheap jeans made from subpar materials.
'Mark Zuckerberg has been transparent about his goal to monopolize digital news distribution. “When news is as fast as everything else on Facebook, people will naturally read a lot more news,”'
Oh lord. That's not news. Those are sensationalist, click baity headlines making us all collectively dumber. It's anecdotal, but I'm quite sure the 'news' my girlfriend has gotten from FB has made her less informed.
I actually wrote an FF plugin to hide that right bar, since I would now and then get sucked in by that crap and waste 15 minutes reading about Lindsey Lohan or whatever.
When the employees of Facebook collectively ask "What responsibility do we have to stop the election of Donald Trump" at an internal meeting, it gives me Orwellian chills. Today, it's not governments we need to fear the most, it's data hungry, fascist internet corporations.
Sometimes people have a hard time decoupling politics from their jobs. They should consider how ridiculous it would sound to say "What can we do to stop Bernie Sanders?" That should give them pause. As far as I know, all the major candidates are operating within the law, and thus we ought not try to "stop them" just because our politics differ.
Or what if instead of national figures FB or anyone else tried to railroad an SF Supe they didn't particularly like because they shot down their proposed development. Or Menlo Park.
Exactly. The problem is, as a private company, should Facebook choose to manipulate its news feed algorithm to promote one candidate or the other, or for whatever reason, this would be its prerogative under the First Amendment.
I personally choose not to use Facebook specifically for these reasons. Even though I know many Facebook employees personally, I don't trust its employees to divorce their own interests from their powerful position as Facebook's curators.
Old media usually has had opposition, like there is Fox but there is also MSNBC. Facebook is THE social network, even Twitter is a joke in terms of size against it.
They were also discouraged from mentioning Twitter by name in headlines and summaries, and instead asked to refer to social media in a broader context.
Indeed. It's not like there is the LA Facebook or the NY Facebook or the Chicago Sentinel Facebook, etc... This is more like a "Global Times", let alone a "US Times".
So anything they might do which appears to be editorial in nature regarding politics would be of concern.
Because they're a monopolistic entity with unprecedented control of information flow. Nothing quite like this has ever existed. It's in the public interest that it be regulated precisely because it can control the outcome of elections and other behaviors that are functionally essential in a society.
The Facebook echo chamber is a real effect, and echo chambers create a feedback loop that encourages radicalization. If you wanted to, you could run sentiment analysis over comments mentioning a political candidate, say Donald Trump, and penalize in feed ranking the comments that were pro-Donald or anti-Hillary, thus dampening the echo chamber. Same thing with simply penalizing links from right-wing websites. The echoes then start to fade, which perhaps lowers someone's motivation to vote or donate money or something.
In the end though, you're basically just playing at the margins.
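To make that mechanism concrete, here's a toy sketch of the kind of ranking penalty described above. Everything in it is hypothetical: `sentiment` is a crude keyword stand-in for a real sentiment model, and `penalty` is an arbitrary knob. Nothing here reflects Facebook's actual ranking.

```python
def sentiment(text: str) -> float:
    """Toy stand-in for a real model: score in [-1, 1] from keyword counts."""
    lowered = text.lower()
    pro = sum(lowered.count(w) for w in ("great", "support", "vote for"))
    anti = sum(lowered.count(w) for w in ("terrible", "against", "never"))
    return (pro - anti) / max(pro + anti, 1)

def rerank(posts, candidate="trump", penalty=0.5):
    """Demote posts that mention the candidate positively (hypothetical)."""
    def score(post):
        base = post["engagement"]
        if candidate in post["text"].lower() and sentiment(post["text"]) > 0:
            base *= penalty  # dampen the echo chamber
        return base
    return sorted(posts, key=score, reverse=True)

posts = [
    {"text": "Vote for Trump, he is great", "engagement": 100},
    {"text": "Cute cat video", "engagement": 60},
]
print([p["text"] for p in rerank(posts)])
# the pro-candidate post drops below the cat video despite higher engagement
```

As the parent comment says, this is playing at the margins: the penalty only shifts ordering, it doesn't remove content.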
No. Absolutely not. That's not what is happening at all. I know. I worked on FB trending from the algorithmic side.
The types of trending topics that are blacklisted tend to be low quality trends. Say #FollowFriday, or take examples from the current Twitter trends, #mondaymotivation, #MondayMorning, #DressAFilm, or #MondayBlogs.
They also will blacklist a trend when it's a duplicate of another trend, say #GoodWifeFarewell vs. #TheGoodWife. In fact, they'd probably try to combine those two if the clustering algorithm failed, and then relabel it "The Good Wife", since it undoubtedly corresponds to a FB page.
The manipulation of the candidate set of trends is there because the trending algorithm and the clustering algorithm is too noisy to publicly surface. The trends you actually see are algorithmically chosen based on your behavior.
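For illustration, here's a minimal sketch of how near-duplicate hashtags like the ones above could be detected and merged. The CamelCase tokenizer and the Jaccard threshold are my own illustrative choices, not anything from Facebook's actual clustering pipeline.

```python
import re

STOPWORDS = {"the", "a", "an"}

def tokens(tag: str) -> set:
    # Split a CamelCase hashtag into lowercase words, dropping stopwords.
    words = re.findall(r"[A-Z][a-z]*|[a-z]+|\d+", tag.lstrip("#"))
    return {w.lower() for w in words} - STOPWORDS

def same_trend(a: str, b: str, threshold: float = 0.5) -> bool:
    # Jaccard similarity on the word sets; above threshold = duplicate.
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb) >= threshold

print(same_trend("#GoodWifeFarewell", "#TheGoodWife"))   # True
print(same_trend("#MondayMotivation", "#TheGoodWife"))   # False
```

A real system would of course cluster on the underlying posts rather than the labels, but this shows why two differently-spelled tags can still count as one trend.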
Quoth the former curator: “Every once in awhile a Red State or conservative news source would have a story. But we would have to go and find the same story from a more neutral outlet that wasn’t as biased.”
Well, there you go. Quite frankly, there's a neutral editorial voice that FB wants to project. And so the main article has to be from a legitimate unbiased news source. Breitbart, Newsmax, and the like simply aren't.
Having a list of approved sources to highlight is really common in these types of products, because no one wants Stormfront to be on the front page.
If the topic doesn't get out of the ghetto of a subculture's echo chamber, it's never going to be approved. Even the topics cited are basically just the birther conspiracy du jour.
Even if the topic gets approved with a different main link, everything is being shown on the results page, just not with a gold star next to it in the top slot.
The problem is that "legitimate unbiased news source" is subjective. Unless you only allow direct press agency news (and even there we often have discussions about which words and language are "loaded"), you are defining what is neutral and what is not.
Personally, I would regard Breitbart or Newsmax as just on par with HuffPo; for you they probably aren't. That's why pretending to know what others have to perceive as "legitimate unbiased news" is such a delicate issue that it should not be done in secrecy.
Non-transparency on blacklisted sites and topics will always lead to distrust and the expectation of one-sided political shilling.
Meh. You have editorial control, and you know what type of voice you want. You enforce the voice. There's no point in trying to satisfy everyone because you can't. For example, Wikipedia was seen as a den of communist atheist propaganda and so Conservapedia was created.
Sure you could publicize your approved list of sites, but then you're just going to attract a bunch of people with an agenda to trying to promote their agenda by claiming some sort of slight by not being included. In the end though, the only honest response to this criticism is, "Screw you. Make your own site."
Topics aren't blacklisted except when they duplicate something else. Most topics simply aren't approved for trending. If you want to know what's not approved, look at Twitter trends, it's full of junk. Honestly, that's the best public example of what the trending topics algorithm detects.
You really need to read Tom Stocky's post about this, because that's exactly how it's run. But you don't really have to, because it's exactly what I told you.
Finally, if you actually click on any of the trends, pretty much everything on that page is algorithmic. Find a trend that mentions Obama; I'm sure you're going to find a racist post on that list, because there are always scummy people.
PS. For the record, I always feel dirty if I visit HuffPo. Gawker on the other hand, I'll own up to reading that tabloid.
Very much agree, but do you really think it's all that different at the NY Times? I'm sure at FB no one thought they were going to beat Trump by being deceptive or manipulative. No, they would just link to stories they thought were more accurate, had less spin, covered more sides etc. Likewise, I'm sure many NY Times reporters are driven by a feeling of duty to concentrate on issues where they feel they can bring truth/illumination to the campaign and get Trump defeated.
I guess it's plausible to me that a new company like FB lacks some sort of important company culture for reporting that the NY Times has by virtue of its long history, but this seems minor. The real issue is (1) the consolidation in the news industry and (2) its political homogeneity. FB and the NY Times are each examples of both.
There are a few important differences at play, I think. First, the NYT has a few million paid subscribers, whereas Facebook is a free service with a billion and a half members. Just like their technical infrastructure, the sheer scale of their platform and reach is mind-boggling, global and borderless - unlike anything the world has ever seen. We may be fooling ourselves by judging Facebook through the lens of old media - there's been a paradigm shift and it needs to be appreciated and judged as such. Second, the NYT is a media company - which generally leans left. They create content. Facebook is more akin to the printing press company that newspaper companies employ (ROUGH analogy), or to cable providers for TV. Remember how much shit Verizon (Comcast?) got itself into when it manipulated internet packets to deliver its own advertising to its subscribers? Didn't they get sued and have to pay millions? I think Facebook manipulating its news feed algorithm should be seen as egregious as that behavior.
To me, the relationship between the NYT and FB (or any other content creator and FB) is not very different from that of freelance content writers and content companies like Demand Media, of eHow and Livestrong fame. The only difference is that content published on FB, either by users or sponsored, gets to keep its branding, whereas freelance content writers are ghosts, although most people I know share content on FB with total disregard for the source. It's sad to see how quality content creators like the NYT now get to share the wall with unscrupulous ones. For all intents and purposes, FB is turning news into a commodity.
There was a recent study looking at this possible issue. They called it Search Engine Manipulation Effect [1]:
"The fifth experiment is especially notable in that it was conducted with eligible voters throughout India in the midst of India’s 2014 Lok Sabha elections just before the final votes were cast. The results of these experiments demonstrate that (i) biased search rankings can shift the voting preferences of undecided voters by 20% or more, (ii) the shift can be much higher in some demographic groups, and (iii) search ranking bias can be masked so that people show no awareness of the manipulation."
What if the employees at Facebook instead asked "What ethical responsibility do we have to ensure that we, as the primary distributor of news in the world, do some minimal amount of fact checking."
The goal shouldn't be to "stop Trump" — that's not what we want a news distributor to do. I don't want bias against a particular person in my news distributors. What I want is for my news distributors to actually have ethical standards that enable them to serve the public good (and as a by product hopefully prevent lying demagogue's from gaining power).
Is it "bias" to use a standardized method or algorithm (even one that involves humans reading natural language) to filter content by accuracy and provide some interface indication not just for phishing and porn, but also for falsehoods?
Realistically, this is exactly how they would go about trying to stop Drumpf. Plausible deniability and all that jazz.
In fact, decisions which Facebook makes this election season will impact the result regardless of the intentionality behind those decisions. Even something as mundane as the mix between links and images impacts different candidates disparately.
Realistically, the only way for anyone not to seem like they're going out their way to attack Trump during this election is to completely forget about fact checking, because he lies so repeatedly and shamelessly about things he's already been caught out on: https://www.washingtonpost.com/news/fact-checker/wp/2016/03/...
While this may be a more direct or open admission of a "moral" attempt to shape political outcomes, I think it's worth remembering that every other major corporation has its hand in the pot, albeit in a less direct fashion, when its board pours money into lobbyists, special interest groups, and media outlets. Sometimes for "moral" reasons, sometimes for economic reasons - but every major corporation is a political player.
As I mentioned in my other comment, we may be fooling ourselves by judging Facebook through the lens of old media or old corporations - there's been a paradigm shift and it needs to be appreciated and judged as such. A free, borderless and global communications platform and service used by 1.5 billion people simply doesn't fit into our classic models of "free speech" and "corporate personhood". I'm not sure what the best approach will be to ensuring their neutrality when it comes to their algorithms, but I think that assurance should exist, ideally in a trustless, provable fashion.
I am no fan of anyone currently running for President. Assuming that Clinton, Sanders, or Trump ultimately wins, the House and the Senate will have to be our last line of defense in keeping the "winner" from doing serious damage to the country.
That said, I absolutely think that Facebook and other major media outlets should be held to a form of the Equal Time rule:
I would normally disagree on the equal time rule because they are a private company, but Facebook recently built a data center in my town, took all of our municipal water supply, won't pay taxes for several years, and is only creating 4 jobs. And we allowed it because of some weird obsession our town had with having a data center here. Democracy...yeah!!!
Screw 'em. If they want to play the corporate welfare game, they can abide by the equal time rule.
What utter BS. Facebook has spent probably close to twenty million on building its Altoona data center (only one built recently so will assume that is the one in question). It did not take the municipal water supply, in fact this particular data center is one of the greenest in the world. It did get a tax break on its water usage, like most large municipal customers do, and according to news articles had an additional water main run to the site, but it does not have any particular water requirements that would necessitate any magic source. If you want to look for water savings in Altoona start at Adventureland and the amount of water being wasted there on a daily basis.
As for taxes, what taxes do you think were going to be assessed on this property? It was an empty piece of land before FB arrived so it was not like the city was losing much property tax to begin with. In return Facebook is paying salaries for forty or so people on site and the thousand or so who worked for over two years building the site. Oh, and because tech firms like it when others do the research and general site selection you will notice that Microsoft is dumping tens of millions into a data center in West Des Moines as well. You're welcome.
You "allowed" it because it was a good deal for a city like Altoona that otherwise was not much more than an amusement park and the last place to get gas on I-80 before you hit Des Moines.
Can you expand on this a bit? Do you mean to say when you turn on the tap in the kitchen nothing comes out and that you are no longer able to use the shower or flush the toilet?
They changed the water source from the local aquifer to the local river. Apparently the data center takes the water and dumps it back into the ground. The sourcing change cost more, so citizens' water prices went up. Guess what Facebook gets? You got it: free water. Not a joke.
It is crazy how much money we just give to profitable companies.
Sorry to hear that. Can you link to any news, or other, sources about this? I don't use facebook so I'm glad to not be contributing to this, but maybe there are other data centres in your town I do use, who knows.
It was considered a big deal at the time, though you could consider going from the sitting President answering questions on Reddit to shoveling large amounts of money to Facebook to ensure your message gets seen a retrograde step that's happened since.
I'm just amazed at how happy and ignorant people seem to be about giving Facebook more and more signals about their likes/dislikes.
They started the profile badges thing and folks thought that was so cool.
I had to explain to them that this is one more data point for Facebook - if I change my profile badge to a Manchester United logo, it probably means that I really, really care about Manchester United.
I can then be targeted by an advertiser that wants to sell merchandise to Manchester United fans
I agree with you 100%. At the same time, in this specific situation I can't help but still be even more disturbed at the thought of a Trump presidency than I am by a data hungry, Orwellian, fascist Internet corporation. Not defending the Facebook employee(s) who asked that question by any stretch, just marveling at how screwed up our political system is. :(
There's nothing wrong with asking the question. In a free society, it should be perfectly acceptable to do so without consequence. It would be Orwellian if the question were never even asked.
Just think how unreasonable it sounds to say "What responsibility do we have to stop Hitler?"
Poe's law or whatever, but I can't even imagine a worse candidate than Trump; even Eric Cartman would be a better president than this Daily-Mail-esque cartoon of a human being.
Downvote all you want; still, it's interesting how no one took the time to answer the question: if Facebook had existed back then and had tried to stop Hitler from getting to power, would that have been Orwellian? Is it our duty to always defend democracy, even when we strongly presume it will bring destruction and misery?
Trump is a proto-fascist asshole, no doubt about that but when it comes down to it he shared his worst aspects with every major candidate that ran for the GOP nomination this season. The only difference is that Trump didn't play the game how he was supposed to so the media branded him as being particularly evil when he really is embodying the beliefs of the party that he now represents (and that all other candidates paid lip service to in one form or another).
Thoughts about the utility of schools to serve this function? We are already teaching humans very simple to very advanced tasks and we do so millions of times a day. Is the fact that tasks/knowledge are being structured for paced human learning irrelevant for this purpose?
It's not irrelevant, since there is a minor area of ML called 'curriculum learning' which asks how to order examples to teach an ML algorithm most efficiently, and it comes up in some other contexts (for example, a boosting algorithm which focuses on optimizing performance on hard/misclassified cases can be seen as somewhat like curriculum learning, and there are variants of gradient descent which focus on hard examples rather than wasting time on cases where the NN can already get the right answer; and for 'active learning', you want to pick the example which will teach the algorithm the most), but there's not much you can take from known pedagogy at the moment and apply straight to NNs. Even the non-bullshit parts of education, like spaced repetition, have no clear analogues for tasks like 'train an RNN to write news headlines based on article text using this corpus of human-written headlines'.
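For the unfamiliar, the core of curriculum learning is tiny: present examples in order of increasing difficulty instead of at random. Here's a minimal sketch assuming sequence length as a toy difficulty proxy; real curricula typically use model loss or domain heuristics instead.

```python
def curriculum_batches(examples, difficulty, batch_size=2):
    """Yield training batches ordered easy-to-hard by the given measure."""
    ordered = sorted(examples, key=difficulty)
    for i in range(0, len(ordered), batch_size):
        yield ordered[i:i + batch_size]

# Toy corpus for the headline-writing task; length stands in for difficulty.
headlines = [
    "short one",
    "a somewhat longer headline here",
    "tiny",
    "a very long and winding headline about the news",
]
for batch in curriculum_batches(headlines, difficulty=len):
    print(batch)  # shortest headlines come first
```

The open research question is exactly the one raised above: what the right `difficulty` function is, and whether human pedagogy offers any guidance for choosing it.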
In fact, "expert systems" were built around this principle, though generally more in the industrial context.
I don't think that the way knowledge transfer is structured for humans is by default distinct from how we would do it for AI in the long term, but that's a pretty wide range of pedagogy so some tasks make more sense than others for the state of the art today.
In fact the way we train ANNs/CNNs is similar to how infants and toddlers do early classification. Frank Guerin at the University of Aberdeen has done some study in this area that is in its early stages.
As far as I know there have not been efforts to set up schools to train AIs from the general population.
The closest thing to it is the effort by Google and Facebook to train their classifiers (facial recognition, text detection, etc.). We are working on an internal process that uses user input to classify objects in the home (furnishings mostly) for auto segmentation and detection.
So, I got interested in this article's description of horrid working conditions and decided to read it carefully. But I noticed that it gives emotional descriptions long before the actual facts. I wouldn't go as far as calling this manipulation, but it's certainly a disturbing writing style.
3rd paragraph:
> grueling work conditions, humiliating treatment, and a secretive, imperious culture in which they were treated as disposable outsiders.
6th paragraph:
> “It was degrading as a human being,” said another. “We weren’t treated as individuals. We were treated in this robot way.”
And then, finally, in the 10th paragraph, we get a glimpse of the facts:
> they received benefits including limited medical insurance, paid time off after 6 months and transit reimbursement
(BTW, is it usual for contractors to receive such perks?)
> A company happy hour would happen at 8 p.m., and we’d be working
Horrible, inhumane treatment indeed.
> Over time, the work became increasingly demanding, and Facebook’s trending news team started to look more and more like the worst stereotypes of a digital media content farm. Managers gave curators aggressive quotas for how many summaries and headlines to write, and timed how long it took curators to write a post. The general standard was 20 posts a day.
20 posts during an 8-hour workday is 24 minutes per post. That is considered too little? Seriously?
So — apart from all the pretty words, I didn't really see any especially bad treatment. Hell, I'm pretty sure that your average newspaper employees have more nightmare stories.
“It was degrading as a human being,” said another. “We weren’t treated as individuals." Hmmm. Trying hard to feel sympathy/empathy for these victims, but my algorithm is throwing an exception.
I don't think the assessment that they were part of a future algorithm is too far off. Given enough expert input, picking a fitting image/video or headline should be doable. Worst case, you reduce the number of humans needed to one quality-assurance person. The algorithm says "this headline, this image"; the human answers yes, no, or fix.
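That review loop could be sketched like this (all names are hypothetical, and the "model" here is a trivial stand-in for a trained generator or ranker): the algorithm proposes a headline and image, and a single QA person approves, rejects, or fixes the proposal.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Proposal:
    headline: str
    image: str

def propose(article_title: str) -> Proposal:
    # Stand-in for the model: a real system would run a trained
    # headline generator and image ranker here.
    return Proposal(headline=article_title.strip().capitalize(),
                    image="top_image.jpg")

def review(proposal: Proposal, verdict: str,
           fix: Optional[Proposal] = None) -> Optional[Proposal]:
    """One QA decision: approve as-is, apply a human fix,
    or reject (None) to request a new proposal."""
    if verdict == "yes":
        return proposal
    if verdict == "fix":
        return fix
    return None

p = propose("  facebook retires trending team")
approved = review(p, "fix",
                  fix=Proposal("Facebook retires Trending team", p.image))
```

Every "fix" the human makes is exactly the kind of labeled correction you would feed back into training, which is why a single reviewer can stand in for a whole curation team.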
I wonder that myself. I think it has more to do with standard corporate overthinking about what skills are actually required for a job. It certainly makes it easier to say, "We want someone to write titles and snippets of news stories. Who does that? Journalists. Cool, let's get some of those."