A guide to potential liability pitfalls for people running a Mastodon instance
This is not about just creating a Mastodon account: it's for people who are running a Mastodon server. If you just made an account on someone else's server, you can safely ignore this.
Mastodon calls each specific server an "instance". My Twitter thread made it super clear that people, even people who are running instances, don't know what this means, so having used the Mastodon technical language in the intro, I will now shift to calling them "servers" from here on out. (In several places I am using more commonly understood terms rather than the correct technical terms.)
I'm only addressing legal/liability issues, not the practicality of running a service. Things like "make backups", "keep backups offsite/on a different network", "try restoring from backup occasionally to make sure they're working", "evaluate every release of every new package installed on the machine you're hosting on to weigh security fixes vs potential for your platform breaking", "lock down the machine you're hosting on to minimize network intrusions", "what kind of content moderation policies you should have for social reasons rather than legal ones", etc, are all outside the scope of this document.
A very kind internet lawyer on Twitter provided a few posts that you may want to read alongside this one, although the second was written in 2010 and doesn't cover some of the other stuff I'm going to get into:
* Copywrong Again: Founding the Next Pinterest or Napster?
* If You Build It, They Will Abuse It
Introduction
Mastodon is a decentralized ('federated') network that makes it very easy for people to start their own Mastodon servers ('instances') and communicate with the larger universe ('the fediverse'). The impending entropy-related demise of Twitter has been prompting a lot of people to start up their own Mastodon servers for themselves and their friends, and people are thinking about them like Discord servers. Because Discord hosts their 'servers' for you, under their URL and on their hardware, the potential liability accrues to Discord, not to the person who started a Discord server.
However, the same isn't true for Mastodon. Because Mastodon servers are self-hosted, appear under URLs the server owner controls, and are on hardware that server owners arrange the details of, the potential liability for anything posted on an individual Mastodon server (including content that was originally posted on another Mastodon server but appears under your URL due to federation) accrues to the individual server owner, not to Mastodon gGmbH, the nonprofit that handles the code and oversees the protocol.
If you control any platform on the internet that accepts user-generated content, there are multiple sources of liability that can land on your head. Some of them can be mitigated with a few simple actions, some of them can be mitigated with policy documents, and some of them you can look at and go "this risk is small and the potential outcomes are tiny, so my personal risk assessment says that I can ignore it". You need to make those decisions with an informed sense of the potential risks, however.
This document is intended to cover the absolute basics of "potential liability sources for running an online service in the US". It also applies to services that accept user-generated content other than Mastodon: if you host your own forum, you should think about these things too. It does not apply to services that you don't host/control, such as Discord or Slack. The general rule of thumb: if you pay a company that isn't the company that makes the product to host it, and/or the content appears at a domain you registered and control, this all is probably stuff you should think about.
Legal structures and considerations for overall ass-covering
The absolute safest thing to do, to shield your own personal assets, is register an LLC (limited liability company), get a separate bank account in the name of the LLC, transfer any assets and liabilities (donations you receive / bills you pay) to the LLC, and get insurance in the name of the LLC. This is obviously complete overkill for anyone who's running a really small server, especially because the annual fees for LLC registration are likely to exceed whatever amount your users chip in, but if you're running an open-registration server, or you exceed 20-30k users, or you have a lot of personal assets, you should think hard about it and talk to a lawyer. (Especially because there are lots of ways to fuck up a single-person LLC and lose the liability protection.)
If you decide that registering an LLC is overkill, you should increase your own personal insurance coverage. Your homeowners' or renters' insurance should let you add an umbrella rider that will give you liability coverage (including paying a lawyer to defend you if you're sued) relatively cheaply. I recommend a policy minimum of $2m of coverage per incident. In 99% of cases, you won't need it in the slightest; in that last 1% of cases, it will save your fucking ass.
If you accept donations, sell merchandise, or collect money in any way, the IRS is going to want you to pay taxes on that money. Talk to an accountant about how you can minimize your tax liability. If you only have small amounts of income related to the enterprise, you can probably skate under the radar with only a tiny increase of tax-bill-related risk, but recent changes to federal law have sharply lowered the threshold at which money transfer platforms like PayPal are required to send you tax documents. (PayPal's threshold used to be $20k, for instance; now it's $600.)
If your money transfer platform issues you a 1099-K at the end of the year because you crossed their reporting threshold, that 1099-K has to be accounted for on your taxes or else the IRS will 'correct' your return. If the IRS corrects your return, they will not apply any possible cost-of-doing-business deductions like server hosting cost against that income, which can result in you owing way more in taxes and penalties than you should. (For example, if you took in $1,000 in donations and spent $1,200 on hosting, deducting your expenses means there's no net income to tax; if the IRS corrects your return without those deductions, you'll be taxed on the full $1,000, plus penalties.)
Keep track of all of the money you spend on running the server -- hosting costs, domain registration costs, etc -- and how much money you take in. If your money transfer platform issues you a 1099-K, take it all to an accountant, fling it on their desk, and say 'help' and they will.
Depending on the activity of your users, you may receive contacts from law enforcement asking you for information about your users. I get into that at the very end of the document.
Copyright
The relevant section of US law that applies to US-based online platforms is 17 USC §512, aka the DMCA, aka the Digital Millennium Copyright Act. It says that online platforms are not liable for the copyright violations posted by their users if they 1) do not have "actual knowledge" of infringing activity and 2) register a designated agent with the US Copyright Office to receive and handle reports of copyright violations on their platform, and post a notice saying who that designated agent is.
Even if you have a single-user Mastodon server, the fact that Mastodon can cause federated content (other people's posts) to show under your URL means that you should register a designated agent. If a rightsholder sees your server's copy of an infringing post, they will go after you, because it appears under your URL. The recent surge in "automated DMCA enforcement" copyright troll legal shops means that you should register a designated agent, check the email address you give regularly for DMCA notices, and follow the process set forth in the law for handling them.
You register a designated agent by signing up with the Copyright Office. You used to be able to use a PO box, but they've changed that: you must give an actual street address (although I've seen people still registered with PO boxes, so they might not check very hard). If you have security or doxxing concerns, find a private mailbox service that gives you a 'real'-looking street address. The cost to register a designated agent is $6, and you're required to re-register every 3 years or whenever there's a meaningful change in your registration information. (Note that the Copyright Office sends you your renewal notice three months early, and if you renew the second you get the notice, you lose 3 months' worth of the fee you paid. It's only $.25, but it's the principle of the thing.)
You can look at our DMCA policy for an overview of what a DMCA notice is required to contain and what the process looks like. You need to post a similar document on your platform. (Ours is CC-BY-SA and you can use it if you want. Please do note -- I'll get to it in a second -- that we deliberately assume a small amount of potential legal risk to mitigate some of what we feel are the worst abuses of the DMCA process.)
If you get a DMCA notice that doesn't contain all 6 required items, you can tell the rightsholder that they need to revise their notice. When you receive a notice that contains everything it needs to contain, you're required by law to "respond[...] expeditiously to remove, or disable access to, the material that is claimed to be infringing or to be the subject of infringing activity". In practice, this means "do it as soon as you get it"; the exact definition of "expeditiously" is fuzzy.
Once you've disabled the material that's claimed to be infringing, the person who posted it can file a counter-notification saying that they don't believe their use of the content is infringing. If they file a counter-notification, you need to forward it to the person who filed the DMCA notice so they can file a lawsuit over use of the material if they disagree. If you get a notice that the rightsholder has filed a lawsuit, the material needs to stay down. If you don't get notice of a lawsuit, you need to restore access to the material no sooner than 10 and no later than 14 business days after you receive the counter-notification.
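(For the calendar-math inclined: here's a minimal sketch, in Python, of how you could compute that restore window. The dates are hypothetical, it only skips weekends, and it ignores federal holidays -- this is an illustration of the 10-to-14-business-day rule, not a compliance tool.)

```python
from datetime import date, timedelta

def business_days_after(start: date, days: int) -> date:
    """Count forward the given number of business days (Mon-Fri), skipping weekends."""
    current = start
    while days > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Monday=0 through Friday=4
            days -= 1
    return current

# Hypothetical example: counter-notification received on Monday 2022-11-14.
received = date(2022, 11, 14)
print(business_days_after(received, 10))  # 2022-11-28: earliest you may restore
print(business_days_after(received, 14))  # 2022-12-02: latest you must restore
```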
The law requires you to "provide[...] for the termination in appropriate circumstances of subscribers and account holders of the service provider's system or network who are repeat infringers". The definition of "repeat infringer" is not articulated in the law; most providers have settled on a "3 strikes" or "5 strikes" policy.
The risk we deliberately and consciously assume is that we allow for an "I'm not going to file a counternotification, but I feel that my use here is fair use" response from the user who posted the allegedly infringing material, which doesn't count as a "strike" for the purposes of determining repeat-infringer status. This is because filing a counternotification means providing all your contact information to the person who filed the original DMCA notice. That makes it a very common abuse tactic to file a flood of complete bullshit DMCA claims against someone whose contact information you want: the claims either nuke the person's social media account or force them to turn over personal information. Our policy adds a tiny bit of risk to us in exchange for closing off that attack vector; we're willing to do that because subsequent case law has acknowledged that providers can reject notices that aren't issued in "good faith".
That precedent, established in Lenz v. Universal Music Corp., 801 F.3d 1126 (9th Cir. 2015), says that providers should conduct a fair use analysis before accepting any DMCA notice (and rightsholders should conduct a fair use analysis before issuing any DMCA notice, but good fucking luck there). If you're willing to get down into the weeds of copyright law and really stay on top of case law, and you're comfortable with assuming a small amount of extra risk, you can reject notices for content you believe is fair use or notices you feel are issued for abusive purposes and tell the rightsholder that you won't be processing the notice. It is a risk, though, and if you're in a life situation where you need to be more risk-averse, just process every notice you get that has the six required elements.
Failure to comply with the steps necessary to claim "safe harbor" under the DMCA opens you up to being held liable for any copyright infringement on your server. Court judgments for copyright violations can be absolutely massive -- starting at six figures and only going up -- so you should do everything you can to comply.
The DMCA itself only applies to servers hosted in the US. However, it implements several WIPO international treaties, and most other countries have some form of similar obligation placed on server operators to handle copyright violations on their server. (The EU's 2019 Copyright Directive and Australia's News Media Bargaining Code are much more brutal, for instance.)
The Children's Online Privacy Protection Act (COPPA)
The Children's Online Privacy Protection Act of 1998 (COPPA) is a United States federal law, located at 15 USC §§6501–6506. (Read it online by starting with §6501 and hitting 'next' until you get to §6506.) It specifies loads of things you need to do in order to collect data from children under 13, including getting parental consent to let someone under 13 create an account.
Complying with those things you need to do to let someone under 13 create an account (and being able to prove that you've complied with them if the FTC ever comes knocking) is fucking irritating. In practice, almost every service in the US that's not specifically aimed at kids complies with COPPA by not letting children under 13 register for their service, usually by requiring users to submit their full date of birth (including year) when they register and blocking registration from anyone whose DOB makes them under 13. (If you lack the ability to block registration from anyone under 13 after verifying DOB, don't accept the DOB information; just put "you must be 13 to hold an account on this service" in a very prominent place in your signup pathway. If you collect the DOB but can't act on it, it's affirmative proof that you knew the user was under 13 and let them sign up anyway.)
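(If you're building or modifying a signup flow yourself, the age gate is only a few lines of code. A minimal sketch in Python -- the function names are mine, not Mastodon's; the important part is the comment about not storing the DOB when you reject:)

```python
from datetime import date
from typing import Optional

MINIMUM_AGE = 13  # COPPA: don't accept registrations from children under 13

def age_on(today: date, dob: date) -> int:
    """Age in whole years as of `today`."""
    years = today.year - dob.year
    if (today.month, today.day) < (dob.month, dob.day):
        years -= 1  # birthday hasn't happened yet this year
    return years

def can_register(dob: date, today: Optional[date] = None) -> bool:
    """Return False (reject the signup) if the user is under 13.

    If this returns False, refuse the registration AND discard the DOB:
    storing a DOB that shows the user is under 13 while letting them sign
    up anyway is affirmative proof that you knew.
    """
    return age_on(today or date.today(), dob) >= MINIMUM_AGE
```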
The FTC is the regulatory agency that enforces COPPA, and it does not fuck around. The largest COPPA violation fine ever issued was against TikTok, for $5.7 million. Penalties are "up to $43,280 for each violation", assessed against the service. Fortunately, risk mitigation is easy: just don't let anyone under 13 sign up for your server. As long as you're not "directing your service" to people under 13 and you don't allow signups by people under 13, you don't trigger any of COPPA's recordkeeping and permissions requirements. (The exact definition of what "directing your service" to children means lives at 16 CFR §312.2.)
The FTC has a useful FAQ on COPPA compliance.
This is US legislation, but the US enforces it against any platform that accepts signups from the US, even if the platform is not US-based. Enforcement is significantly less likely if you're based outside the US and don't deliberately market to children or make your site "directed to children" as defined above, but unless you specifically want to allow kids on your server, just block registration from anyone under 13.
Child sexual abuse material
"Child sexual abuse material" (CSAM) or "child sexual exploitation material" (CSEM) is the preferred term for what people (including, regrettably, US lawmakers) call "child pornography".
The law around it is a giant messy ball of scenarios, exceptions, exceptions to the exceptions, etcetera, and I can only cover the bare minimum. There are multiple levels of liability involving CSAM. The strictest, 18 USC §2251, covers "any photograph, film, video, picture, or computer or computer-generated image or picture, whether made or produced by electronic, mechanical, or other means, of sexually explicit conduct" in which a minor (someone under 18) is "engaging in sexually explicit conduct", along with any depiction that "is a digital image, computer image, or computer-generated image that is, or is indistinguishable from, that of a minor engaging in sexually explicit conduct" or that "has been created, adapted, or modified to appear that an identifiable minor is engaging in sexually explicit conduct" (definitions at 18 USC §2256(8)). In short: photo or video of a minor engaged in sexually explicit conduct, or a computer-generated image indistinguishable from one.
Drawn/artistic images of an apparent minor engaged in sexually explicit conduct, written depictions of an apparent minor engaged in sexually explicit conduct, and "material that is harmful to minors" ("any communication, consisting of nudity, sex, or excretion") that are also "obscene" fall into the second category. The definition of obscenity was set forth in the court case Miller v California, 413 U.S. 15 (1973), and has been back-adopted into the US code in a few places, this one included. The definition of obscenity is material that, "taken as a whole and with reference to its context,
1. predominantly appeals to a prurient interest of minors;
2. is patently offensive to prevailing standards in the adult community as a whole with respect to what is suitable material for minors; and
3. lacks serious literary, artistic, political, or scientific value for minors."
If you find any of the following on your server, whether it's locally posted or federated content:
* photo or video of a minor engaged in sexually explicit conduct, or computer-generated images that are indistinguishable from a minor engaged in sexually explicit conduct (§2251, §2252);
* someone advertising that they have images or video of a minor engaged in sexually explicit conduct for sale anywhere else on the internet (§2252A);
* someone attempting to induce a minor to produce images of themselves engaged in sexually explicit conduct (§2251);
* someone attempting to buy or sell a minor for the purposes of producing images or video of sexually explicit conduct (§2251A);
* someone linking to a domain name that misleads people into viewing "material that is harmful to minors" that is also obscene (§2252B; see §2252B(d) for the definitions);
then you must follow the reporting requirements in 18 USC §2258A and report it to the National Center for Missing and Exploited Children (NCMEC)'s CyberTipline. You used to need to register with them before making a report, but pleasantly, they've changed that; these days you can just report it without an account.
You are affirmatively required by 18 USC §2258A to "preserve any visual depictions, data, or other digital files that are reasonably accessible and may provide context or additional information about the reported material or person" and "maintain the materials in a secure location and take appropriate steps to limit access by agents or employees of the service to the materials to that access necessary to comply with the requirements of this subsection". This means you must not delete it and the associated information about the poster until law enforcement tells you that you can, but you do have to make it not-visible.
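(To make the shape of that obligation concrete, here's a minimal sketch in Python of a "hide but preserve" handler. Every name in it -- the post object, its fields, the evidence directory -- is hypothetical, not Mastodon's actual schema; it just illustrates "make it invisible, keep the content and its context, restrict access".)

```python
import json
import shutil
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical preservation directory; in practice this should be
# encrypted, access-restricted storage that ordinary moderators can't read.
EVIDENCE_DIR = Path("/var/preserved-reports")

def preserve_and_hide(post, media_paths, reporter_notes: str) -> Path:
    """Hide reported material from public view while preserving it.

    `post` is a hypothetical object with id/author/created_at/body
    attributes; adapt to your actual data model. The point is the shape
    of the obligation under 18 USC 2258A: make it not-visible, but do
    not delete the content or the context around it.
    """
    case_dir = EVIDENCE_DIR / f"report-{post.id}-{datetime.now(timezone.utc):%Y%m%dT%H%M%SZ}"
    case_dir.mkdir(parents=True, exist_ok=False)

    # Preserve the media files themselves.
    for path in media_paths:
        shutil.copy2(path, case_dir)

    # Preserve context: who posted it, when, and the report itself.
    context = {
        "post_id": post.id,
        "author_id": post.author_id,
        "author_email": post.author_email,
        "posted_at": post.created_at.isoformat(),
        "body": post.body,
        "reporter_notes": reporter_notes,
    }
    (case_dir / "context.json").write_text(json.dumps(context, indent=2))

    # Make the post invisible to everyone, but do NOT delete it.
    post.visibility = "hidden_pending_law_enforcement"
    post.save()

    return case_dir
```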
As long as you follow these obligations, 18 USC §2258B immunizes you from criminal or civil liability for any CSAM posted to your server (unless you act or fail to act with "actual malice or reckless disregard", which practically speaking means "you knew people were trading CSAM on your server or any reasonable person would have been able to figure out people were trading CSAM on your server").
Material that is not in one of those "mandatory reporting" categories but does fall into the wider universe of "stuff involving minors that is only illegal if it is also obscene" doesn't need to be reported to NCMEC, but you may still be liable for it. (See United States v Thomas Arthur, in which Mr Arthur was convicted of possessing CSAM images, but also of multiple counts of running a website that contained no CSAM but did contain written depictions of erotic activity involving minors that he did not personally author but that were hosted on his site. That was in the Western District of Texas, which is massively conservative, and the "contemporary prevailing community standards in the adult community" used for the Miller test there are correspondingly extremely conservative: still, hosting the website added an extra 15 years to his sentence.)
18 USC §2258A affirmatively does not require you to proactively search your service or monitor your users for violations of any of these laws. If you do want to, however, a consortium of researchers, online providers, and law enforcement agencies has developed PhotoDNA, a service that lets you compare images your users upload against a database of hashes of known CSAM. (The service, operated by Microsoft in partnership with law enforcement, doesn't store the CSAM itself; NCMEC and the International Center for Missing and Exploited Children perform mathematical operations on the images to produce a "fingerprint" that's used for the comparison.) They offer a cloud-based API that you can use, and the service is free.
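(PhotoDNA's actual algorithm and API are only available under agreement with Microsoft, so I'm not going to reproduce or guess at them here. But to illustrate why this kind of matching uses perceptual hashes rather than exact file hashes, here's a toy "average hash" in Python. PhotoDNA is far more robust than this, but the pipeline shape -- fingerprint the upload, compare against a database of known fingerprints -- is the same.)

```python
import io
from PIL import Image  # pip install Pillow

def average_hash(image_bytes: bytes, size: int = 8) -> int:
    """Toy perceptual hash ("aHash"): shrink to 8x8 grayscale, then set one
    bit per pixel depending on whether it's brighter than the mean. Similar
    images produce similar bit patterns, so a match survives resizing and
    re-encoding -- which a cryptographic hash like SHA-256 would not."""
    img = Image.open(io.BytesIO(image_bytes)).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for px in pixels:
        bits = (bits << 1) | (px > mean)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Bits that differ between two fingerprints; a small distance on a
    64-bit hash (roughly 0-5) usually indicates a near-duplicate image."""
    return bin(a ^ b).count("1")
```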
Whether or not small providers should use PhotoDNA is a hotly debated topic in content moderation that I'm not going to get into here, because I'd be writing this all week. It's immensely helpful for finding CSAM trading rings that services otherwise wouldn't catch; it's also an opaque effort by a public-private-law enforcement consortium that's had some, but not exhaustive, scientific validation and is, by necessity, somewhat of a black box. (To say nothing of the technical hassle of setting it up.)
The laws in this section only apply to people and servers located in the US. If you are outside the US, please get legal advice on the obligations your country imposes on providers regarding CSAM. It is the single thing you absolutely should not fuck around with. (Especially if you are in Australia, whose laws are fucking ridiculous about anything that even gestures near CSAM.)
General Data Protection Regulation and Digital Services Act
The GDPR (General Data Protection Regulation) and DSA (Digital Services Act) are EU regulations having the force of law in the EU. (GDPR still applies in the UK despite the UK having left the EU; GDPR also applies in countries that are members of the European Economic Area that are not also EU members.) GDPR is a privacy and data protection law; DSA is a law about illegal content online (and how services handle/moderate it, and how they keep their users informed of their moderation policies).
If you're in an EU country or an EEA country, and you're reading this for the framework of "what possible things should I ask someone who knows more about EU law about", please skip this section and go find a lawyer who's licensed in the EU and familiar with GDPR and DSA. I am directing this section at US platforms only.
Both GDPR and DSA are supposed to apply to any service, even those from outside the EU, that has EU users.
The important parts of GDPR:
* you can't process someone's data unless you have a "lawful basis";
* consent to process data must be "a specific, freely-given, plainly-worded, and unambiguous affirmation given by the data subject";
* you must have a "concise, transparent, intelligible and easily accessible" privacy policy;
* you must "provide, upon request, an overview of the categories of data that are being processed" as well as "a copy of the actual data";
* you must allow people to opt out of being tracked for marketing purposes;
* you must allow people to request the erasure of all the data you have stored on them;
* you must not transfer data of EU citizens outside the EU without consent;
* you must report data breaches to EU regulators within 72 hours of you discovering them, as well as to your users;
* you must design all new features with data protection in mind;
* you must get GDPR compliance certifications from any company you send data to, including the operators of any plugins.
Further obligations are imposed on any business that has more than 250 employees globally, which is not likely to be an issue for anyone running a Mastodon server; if it is, you probably have enough resources to get actual legal advice.
Practically speaking, small US services are very unlikely to ever run into GDPR compliance issues. A few folks in my Twitter thread got somewhat into the weeds of what features Mastodon offers server admins to comply with GDPR, and the consensus was that as long as you have a written privacy policy you are probably okay, especially if you're in the US. Our lawyer's conclusion for DW was that our existing practices and privacy policy were good enough, and we were small enough, that our risk exposure was very low; Mastodon server operators in the US are probably even more protected, because GDPR only applies to entities engaged in "economic activity". Absolutely do not take my word for it, though; talk to an actual lawyer with experience in GDPR compliance.
Some Mastodon servers are blocking any server that's located in the EU and restricting signups to avoid having any EU citizen data on their server. That's one way to minimize your GDPR risk exposure, and if you're really risk-averse, I recommend it.
The Digital Services Act covers the liabilities and responsibilities of services around notice-and-takedown of illegal material, disinformation and harmful content, and algorithmic targeting and advertising targeting. It isn't fully in effect yet -- most providers have until 17 February 2024 to come into compliance. It is exceptionally vague, offers little in the way of implementation guidelines, and nobody has any idea yet what it's going to look like in practice. It is a fucking terrible law. We are all mostly waiting around until someone comes up with some best practices we can all just copy, especially because "micro-enterprises" are exempted from the worst of the requirements. Make a note to check around in six months or so and see what US businesses with no EU presence and minimal EU users come up with.
California Online Privacy Protection Act (CalOPPA)
CalOPPA is California's version of GDPR (thankfully without a lot of the really cumbersome bits). You need to follow it if you have any California-based users. It requires that you:
1. have a privacy policy;
2. that is prominently linked on your homepage or on every page of your site;
3. that you comply with;
4. and includes information about:
a) categories of personally identifying information you collect;
b) all third parties with whom you may share personally identifying information;
c) a description of the process by which your users can request changes to their personally identifying information;
d) a description of how you'll notify your users about any major changes to your privacy policy;
e) the effective date of the privacy policy.
"Personally identifying information" is defined as: first and last names, physical address, email address, telephone number, Social Security number, any other contact information both physical or online, date of birth, details of physical appearance, and any other information stored online that may identify an individual.
As long as you have a privacy policy, your privacy policy contains all of that required information, and you follow the privacy policy, you're good here. Our privacy policy is also CC-BY-SA, but you shouldn't use it wholesale: it needs a lot of editing for your actual situation, and it would not hurt to run it by a lawyer.
FOSTA/SESTA
FOSTA/SESTA are bills passed in the US that became law in 2018. They are vague, damaging, absolute fucking bullshit. They basically say that Section 230 immunity doesn't apply to anything having to do with "knowingly assisting, facilitating, or supporting sex trafficking".
What constitutes "knowingly assisting, facilitating, or supporting sex trafficking"? We don't fucking know! What distinguishes consensual sex work from sex trafficking for the purposes of this law? You know the legal system is trying to say that all consensual sex work is actually sex trafficking. (Obligatory reading: the Backpage saga.) Only one person has been prosecuted under FOSTA/SESTA so far, the owner of the now-defunct CityXGuide, and that was in the Northern District of Texas, which, like its Western cousin, is famously conservative about anything involving sex work. Some sites interpret their obligations as "you can't have any sex workers on the platform"; some interpret it as "you can have sex workers on the platform, but they can't talk about sex work". Switter, a platform for sex workers (a Mastodon server with some tweaks), ran into multiple issues finding providers willing to offer them service because of FOSTA/SESTA.
Many, many, many sex workers have written excellent analyses of the regulatory and legal backdrop involving discussion of sex work online and how platforms and providers address (or don't address) distinguishing consensual sex work from coercive sex trafficking, and I urge you to find some of them and read their work. (Ashley Lake posts a lot about the topic; she's a great starting point.)
What you do about this will, like almost everything else in this guide, come down to what your personal risk tolerance level is.
Other countries' laws
India has just passed the "Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules 2021"; Russia has Federal Law No. 530-FZ (On Amendments to the Federal Law "On Information, Information Technologies and Information Protection"), the UK has a pending, not-yet-passed "Online Safety Bill", various other countries have their own laws regulating social media, etc. All of them try to define "bad stuff you must take down" and "reports you need to make to us". Most of them kick in at a certain size threshold (Russia's strictest laws only apply to sites with over 500,000 users a day; some countries go by numbers of employees, etc.) Russia even goes so far as to say that you can't hold any data on Russian users on servers that are physically located outside of Russia.
Many countries have some form of "internet ministry" that keeps a registry of sites on the Naughty List and requires ISPs in that country to use the naughty list as a blocklist/filter list for all of their customers. Russia's is Roskomnadzor aka Roskom (fuck Roskom, seriously); the UK is proposing to give that ability to Ofcom; China has the whole Great Firewall, run by the Cyberspace Administration of China; etc.
Roskom will regularly send you nastygrams if someone inside Russia found your platform and discovered content that they don't like, including content that is "unreliable information" or "spreads anti-Russian materials". They will tell you that you have to take it down or else (the "or else" is rarely explicitly spelled out). Assuming you're in the US and do not visit Russia, you can safely ignore Roskom (because fuck Roskom, seriously). We send all their mail directly to trash and never even look at it; we are, correspondingly, blocked in Russia, but almost everyone in Russia knows how to access the internet by VPN or Tor anyway.
If you wind up with a large number of Russian dissidents on your service, Roskom may send you individualized reports that aren't from the same source as their semi-automated nastygrams, asking you to remove content or asking for information about your users like IP addresses. Do not give them any information unless they domesticate a subpoena in the US (more about that later). I, personally, judge my risk such that, after having refused at least one request for user information, I will not physically visit Russia or any state that has close communication with Russia (such as Belarus): we have a lot of Russian dissidents on DW who came over from LJ when LJ was sold to a Russian company, I've directly done multiple things that annoyed the current owners of LJ (aka the state bank of Russia, aka Putin's cronies), and also, I'm queer. This resolution is likely overkill for anyone who has not directly and personally pissed off one of Putin's cronies. (You will know if you have directly and personally pissed off one of Putin's cronies.)
China's CAC rarely contacts actual platforms; they handle all their censorship via the Great Firewall. You may wind up on the Great Firewall, but again, people living inside China with ties to the non-Chinese internet are very good at VPNs and proxies.
The UK has not yet passed the Online Safety Bill (and if you're in the UK and reading this: call your MP and tell them it's a fucking terrible law and they should not pass it). If they do, you'll have to calculate whether you have enough users in the UK that Ofcom blocking you would be massively disruptive. (If you're a user in the UK, you might want to download a good VPN that will let you set your location to outside of the UK if the OSB passes, because a lot of small US-based sites are just going to go ahead and get blocked rather than complying with the bullshit, but you probably already have one for all the US sites that block EU users because of GDPR.)
Various other countries may occasionally try to contact you asking for information about your users or asking you to remove content that they think violates their laws. As I mentioned above, our policy is that anyone who wants information from us must obtain a domesticated US court order requiring us to provide that information before we will, unless the country has entered into an agreement with the US under the CLOUD Act and the data being requested is covered by the CLOUD act. (The list of countries with reciprocal agreements is maintained by the Justice Department and is currently the UK and Australia.) We do not remove material that's illegal under another country's laws but not US law, unless we believe the material violates our Terms of Service in some other way.
The Stored Communications Act, National Security Letters, and law enforcement requests for data
The Stored Communications Act, 18 USC Chapter 121, covers the details of what you can and can't disclose about your users' private communications. In practice, the exact details of what the SCA applies to is fuzzy for social media where some communications are intended to be public and some are intended to be private or for a limited audience of recipients. You should generally treat any post other than completely public as covered by the SCA.
Based on all the government subpoenas and warrants I've ever had to handle during my entire career, I will note that some law enforcement agencies write their warrants very specifically -- asking only for metadata and subscriber information and not any stored communications -- and some agencies write their warrants extremely sloppily. (The US Marshals Service, the agency that handles federal arrest warrants and violations of federal parole, among other things, gets my gold star for the most narrowly-scoped search warrants I've ever seen, for the record.)
18 USC §2702 covers when you are permitted to voluntarily disclose the contents of private communications that are covered by the SCA:
1) when you're disclosing the communication to the person the poster intended it to be seen by;
2) under circumstances covered by 18 USC §2517, 18 USC §2511(2)(a), or 18 USC §2703;
3) when the poster authorizes you to disclose it;
4) when you're disclosing it to a service or provider that's responsible for getting it to where it's intended (ie, you can send it to another Mastodon server if the recipient is on that Mastodon server);
5) if disclosing it is necessary for "protection of the rights or property" of your server;
6) if you're reporting something to NCMEC as you're required to by 18 USC §2258A;
7) to law enforcement, if you became aware of the contents through the normal administration of your server and you reasonably believe it involves the commission of a crime;
8) to a government, if you "in good faith, believe[...] that an emergency involving danger of death or serious physical injury to any person requires disclosure without delay of communications relating to the emergency";
9) to a foreign government, "pursuant to an order from a foreign government that is subject to an executive agreement that the Attorney General has determined and certified to Congress satisfies section 2523" (aka 18 USC §2523).
18 USC §2703 covers when you are required to disclose the contents of communications covered by the Stored Communications Act: when you receive a search warrant or subpoena that was issued by a United States court, or a foreign search warrant from a country that has reciprocity with the US (as covered by 18 USC §2523). (The list of countries with reciprocal agreements is maintained by the Justice Department and is currently the UK and Australia.)
If you disclose the contents of communications covered by the Stored Communications Act when you shouldn't, you can incur liability. The safest stance to take is that you categorically will not disclose any subscriber data -- whether that's the contents of communications covered by the SCA (limited-reach posts, DMs, PMs, and other private communications) or metadata and subscriber records (such as saved IP addresses used to access the site, the email address provided at registration, and the recipients of communications but not their contents) -- without a search warrant, unless it's reporting CSAM to NCMEC as required by 18 USC §2258A.
You are allowed to charge the requesting agency for complying with a subpoena to produce electronic records. The exact amount you're allowed to charge varies by state law and whether it's a state agency or a federal agency. You are also allowed to object to a subpoena on the grounds that the data it requires you to produce is "not reasonably accessible because of undue burden or cost". This isn't something you'd DIY; if you want to charge a fee or move to quash the subpoena because of undue burden, you will need a lawyer.
There's a small chance you may also receive a National Security Letter, asking for metadata about a user or a post. NSLs can't be used to access "stored communications", only metadata. NSLs are authorized by the Electronic Communications Privacy Act of 1986, certain parts of the PATRIOT Act of 2001, several reauthorizations of the PATRIOT Act in subsequent years, and various case law (all of which mostly concern 18 USC §§2510–2523). NSLs do not need to be signed by a judge and do not need to be ordered by a court.
If you do get one, it is likely that the NSL will include a nondisclosure provision, ie "you can't tell anyone you got this letter", if the director of the FBI certifies "that otherwise there may result a danger to the national security of the United States; interference with a criminal, counterterrorism, or counterintelligence investigation; interference with diplomatic relations; or danger to the life or physical safety of any person". The constitutionality of the nondisclosure provision has been litigated multiple times, and some of the worst abuses have been mitigated. However, if you get an NSL, you should immediately find a lawyer who's experienced in handling NSLs who can tell you what to do and whether you can contest it. If you freeze at the thought of trying to find someone, call the EFF, and read their whole back catalog of posts about NSLs.
In my entire career of doing this stuff, no site I've ever worked for has ever received an NSL: it's rare for smaller sites to get them unless you attract a userbase that may be under investigation for potential terrorist acts or violations of national security. (I would bet cash American money that Truth Social and Gab have each gotten at least one.) I'm including information on this mostly so that, on the extremely off chance you do get one, you don't freak out. Don't post about getting it, don't tell your SO/friend/partner/therapist/etc about getting it, just call the EFF. Don't tell them you got one, either: say "I need a referral to a lawyer who is experienced with National Security Letters" rather than "I got an NSL and need a lawyer for it".
End notes
I will repeat that this document isn't legal advice; it's intended only to familiarize you with concepts that you potentially will have to deal with if you run any platform of any size that accepts or displays user-generated content in any way. I've had to deal with every consideration on this list except FOSTA/SESTA and National Security Letters at some point in my career, and the point at which you should be familiar with them is fewer users than you think.
I would say that DMCA notices are the ones you're likely going to have to deal with first out of everything on this list, and the way Mastodon handles federation means that you could see your first at very, very few users. If you are very lucky, you won't ever have to deal with CSAM, but it's also possible you may have to deal with "someone over the age of 18 is soliciting nudes from someone under the age of 18" relatively early, and yes, that does count as something you have to report to NCMEC (under 18 USC §2251) if you are made aware of it.
COPPA, GDPR, and CalOPPA are things you can cover your ass on by having a privacy policy that covers everything you do with data and preventing anyone under the age of 13 from registering for your server.
I didn't mention it anywhere in this document, but you should also have a formal Terms of Service, distinct from the privacy policy and DMCA policy but incorporating them by reference. Our Terms of Service is also CC-BY-SA, and you're welcome to use it as a basis for yours as long as you edit the parts that refer to our specific company name and contact information. (I've also done recent Twitter threads about the Terms of Service of another newly created social media platform that covered a lot of issues I found with that site's ToS; if you have questions about any of the clauses in the ToS, why they're there, or what they mean, I can answer those in general terms.) (But again, it's not legal advice and you should talk to a lawyer who is competent in drafting Terms of Service and doesn't just copy and paste clauses from other places.)
Since I'm posting this for a wider audience, I will leave this post open to discussion from people who don't have a DW account. If you choose 'anonymous' as a response type, please sign your comment with some form of name or pseudonymous identifier so I can identify multiple comments from the same person! You can also log in using OpenID if you have an account somewhere on the internet that serves as an OpenID provider.