So I work at Meta and just got MetaMorphized: s/@fb/@meta/g.
When I started, my personal account was banned twice, both times triggered by adjusting 2FA. It was eventually sorted out. I had an old account I could no longer access because email domains had changed, so it looked like I was trying to maintain duplicate accounts. Why that triggered on adjusting 2FA didn't make sense. Then it wanted me to verify my identity in a way that created a Catch-22: it wanted a login from itself, or from a device that didn't have a valid login session. I set the options for better security and left it alone, rolled all of my personal passwords, scrubbed public data, canceled unnecessary social media accounts, and enrolled in security features like Google's Advanced Protection Program. I'm hesitant to make further changes that risk getting me locked out again.
I recently spoke with someone in Readiness who works on hiring human verifier contractors around the world. A primary issue is scale: you would need to hire a meaningful fraction of the world's population to moderate the content being produced. Even then there would still be bias, misinterpreted sarcasm, and varying standards of acceptability and decorum. AI is a force-multiplier to an extent, but it takes human judgement to rectify mistakes, and data to identify brigades, scammers, terrorists, and political manipulators seeking to exploit an imperfect system. Sadly, the humans with the best judgement typically have better career options, so they can't be paid enough to do social media moderation. It can be made better, but the ultimate realization is that even with great care, good intentions, and attempts at making things sensible and fair, there are always going to be mistakes. The goal is to minimize mistakes and not enable genocides, election sentiment manipulation, or product scams. Minimizing mistakes is doable, but it takes persistent vigilance, wisdom about human nature, and creative solutions to deter and prevent harm while avoiding harm to innocent people. Mistakes are bad and disappointing, and it feels bad when they happen.
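To make the scale point concrete, here's a back-of-envelope sketch. Every figure below is an illustrative assumption I'm making up for the exercise (not Meta's actual numbers), but even conservative guesses land you at a six-figure headcount just for the flagged slice of content:

```python
# Back-of-envelope estimate of human moderation headcount.
# All inputs are illustrative assumptions, not real platform figures.

posts_per_day = 4_000_000_000         # assumed content items created daily
flagged_fraction = 0.02               # assumed share needing human review
seconds_per_review = 30               # assumed time for a careful decision
moderator_seconds_per_day = 6 * 3600  # ~6 productive hours per shift

reviews_needed = posts_per_day * flagged_fraction
moderators_needed = reviews_needed * seconds_per_review / moderator_seconds_per_day
print(f"{moderators_needed:,.0f} full-time reviewers for the flagged slice alone")
```

And that's before accounting for appeals, quality audits, language coverage, and attrition, each of which multiplies the number further.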
In case anyone was wondering, the security posture is the inverse of Twitter's. Everything is logged, and access requires a business purpose, a limited time window, a narrow scope, and an approval. Almost no one has access to production data. PII is taken very seriously. There are no laptops with copies of user data; all laptops are encrypted anyway, just in case and on general principle. Password complexity requirements are insane. I can see my own work/personal FBID user object in the graph, but as soon as I try to prod any links to other users, big warnings appear. There's an army of insanely good security researchers and practitioners who build and deploy defense-in-depth tooling, both broad and specific, across prod, corp, and endpoints, reducing the risk of compromise, data exfiltration, and security "oops"es.
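The "business purpose + limited time + narrow scope + approval" model can be sketched in a few lines. This is a toy illustration of the general pattern (time-boxed, scope-limited grants with an append-only audit trail), not a description of Meta's actual internal tooling; every name below is hypothetical:

```python
# Toy model of time-boxed, scope-limited access grants with audit logging.
# Illustrative only -- names and structure are hypothetical, not real tooling.
from dataclasses import dataclass
import time

@dataclass
class Grant:
    user: str
    scope: str          # one narrow scope, e.g. "read:own_fbid" -- never "read:*"
    purpose: str        # recorded business justification
    approved_by: str    # a second person signed off
    expires_at: float   # hard expiry; access lapses automatically

audit_log = []  # every attempt is recorded, allowed or not

def check_access(grant: Grant, requested_scope: str) -> bool:
    ok = time.time() < grant.expires_at and requested_scope == grant.scope
    audit_log.append((time.time(), grant.user, requested_scope, grant.purpose, ok))
    return ok

g = Grant("alice", "read:own_fbid", "debug login issue", "secops", time.time() + 3600)
print(check_access(g, "read:own_fbid"))    # in scope and in window
print(check_access(g, "read:other_user"))  # scope violation: denied, but still logged
```

The key design choice is that denial doesn't skip the log: failed attempts are often the more interesting signal.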
Work users who transit through certain "hostile" countries lose some security credentials and access. I'm actually wondering why laptops aren't spot-checked for malware implants and hardware/firmware modifications. I would assume employees with critical access who travel internationally with their work laptops and phones are prime targets.
PS: I wonder if people would pay $X/month (say $199) for a high-signal social media service that requires a "vouching" invitation, real-name profiles with faces (not visible to the wider internet), sensible and proportional moderation with civilized feedback, political neutrality, and a love of free speech, all to increase the sense of community and reduce the potential for anonymous bad actors. 37signals/Basecamp accumulated research showing that smaller communities with faces and real profile names lead to nicer interactions. I don't recall the source, but communities that are defended, in terms of politeness and boundaries, tend to endure, while undefended communities drive away users and tend to disperse.