Future Plans

Page last updated: August 8, 2019

This page describes general plans for the future development of Tildes. The details of how these features might work will most likely evolve significantly as they are implemented and experimented with.

For information about how the site currently works, see the Instructions pages.

Trust/reputation system for moderation

One of the few constants of online communities throughout their existence has been that they tend to start out good, but as they grow they struggle to maintain their culture, their quality dives rapidly, and they die. It has been happening since the very beginning, with examples like CommuniTree (one of the first BBSes) and Usenet's well-known "Eternal September", and it continues to happen today.

One of the most common ways that communities defend themselves is by appointing moderators—people entrusted with defining and enforcing the norms of behavior for the community. This is an effective system, but has its own weaknesses, including difficult decisions about which users should be made (and allowed to remain) moderators.

In my experience, the best approach has always been to select new moderators from the people already known to be active, high-quality members of the community. My goal with the trust system on Tildes is to turn this process of discovering the best members and granting them more influence into a natural, automatic one.

It's worth noting that the process does not need to be entirely automatic. The trust system won't necessarily be a complete replacement for manually promoting users, and a combination of both systems may end up working best.

Trust based on consistency and accountability

Trusting someone is a gradual process that comes from seeing how they behave over time. This can be reflected in the site's mechanics—for example, if a user consistently reports posts correctly for breaking the rules, eventually it should be safe to just trust that user's reports without preemptive review. Other users that aren't as consistent can be given less weight—perhaps it takes three reports from lower-trust users to trigger an action, but only one report from a very high-trust user.
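As a rough illustration of how trust-weighted reports might work, here is a minimal sketch. Nothing in it is taken from the actual Tildes code: the trust tiers, the weights, the threshold, and the should_take_action helper are all invented purely for the example.

```python
# Hypothetical sketch of trust-weighted reporting. The tiers, weights,
# and threshold below are invented for illustration only.

# Weight that a single report contributes, keyed by the reporter's trust tier.
REPORT_WEIGHTS = {
    "new": 0.0,     # brand-new accounts: reports still get reviewed, but carry no weight
    "low": 0.34,    # roughly three low-trust reports are needed to act
    "medium": 0.5,
    "high": 1.0,    # a single high-trust report is enough on its own
}

ACTION_THRESHOLD = 1.0


def should_take_action(reporter_trust_tiers):
    """Return True if the combined weight of reports crosses the threshold.

    `reporter_trust_tiers` is a list of trust tiers, one per distinct user
    who reported the post (e.g. ["low", "low", "low"]).
    """
    total = sum(REPORT_WEIGHTS.get(tier, 0.0) for tier in reporter_trust_tiers)
    return total >= ACTION_THRESHOLD


# Three reports from lower-trust users trigger an action...
assert should_take_action(["low", "low", "low"])
# ...and so does a single report from a very high-trust user.
assert should_take_action(["high"])
```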

This approach can be applied to other individual mechanics as well. For example, a user could gain (or lose) access to particular abilities depending on whether they use them responsibly. If done carefully, this could even apply to voting—just as you'd value the recommendation of a trusted friend more than one from a random stranger, we should be able to give more weight to the votes of users that consistently vote for high-quality posts.
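Vote weighting could follow the same idea. The sketch below is hypothetical: the weighted_score function, the 0.25 floor, and the trust values are assumptions made for illustration, not a description of how Tildes scores posts.

```python
# Hypothetical sketch of trust-weighted voting; the weighting function
# is invented for illustration and is not how Tildes actually scores posts.

def weighted_score(votes):
    """Sum vote weights, where each vote carries the voter's trust weight.

    `votes` is a list of (voter_trust, direction) pairs, with voter_trust in
    [0.0, 1.0] and direction +1 for an upvote.
    """
    # A floor of 0.25 keeps new users' votes from being worthless,
    # while consistently good voters can count for up to 1.0 each.
    return sum(max(0.25, trust) * direction for trust, direction in votes)


# Two votes from highly trusted users outweigh three from brand-new accounts.
print(weighted_score([(0.9, 1), (0.8, 1)]))            # 1.7
print(weighted_score([(0.0, 1), (0.0, 1), (0.0, 1)]))  # 0.75
```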

Restricted by group, with decay

Trust should be largely group-specific. That is, users should need to actively participate in a particular community to build up trust in it. Because Tildes will have a hierarchy of groups, there are some possibilities with having trust work inside the "branches"—for example, a user that's highly trusted in one music-related group could be given some inherent trust in other music-related ones, but not necessarily anything in groups related to, say, TV shows.
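One way this branch-based inheritance could be modeled is sketched below. The group names, the effective_trust helper, the carry-over fraction, and the trust values are all invented for the example and don't reflect any actual implementation.

```python
# Hypothetical sketch of branch-based trust inheritance. Group names,
# the carry-over fraction, and the trust values are invented for illustration.

INHERITED_FRACTION = 0.25  # how much trust carries over within the same branch


def effective_trust(user_trust, group):
    """Return a user's effective trust in `group`.

    `user_trust` maps group paths (e.g. "music.metal") to a trust value in
    [0.0, 1.0] earned through direct participation. Trust earned elsewhere in
    the same top-level branch carries over at a reduced rate; trust from
    unrelated branches does not carry over at all.
    """
    direct = user_trust.get(group, 0.0)
    branch = group.split(".")[0]
    inherited = max(
        (value * INHERITED_FRACTION
         for other, value in user_trust.items()
         if other != group and other.split(".")[0] == branch),
        default=0.0,
    )
    return max(direct, inherited)


trust = {"music.metal": 0.8}
print(effective_trust(trust, "music.jazz"))  # 0.2 - some inherent trust
print(effective_trust(trust, "tv.drama"))    # 0.0 - nothing carries over
```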

Another important factor will be having trust decay if the user stops participating in a community for a long period of time. Communities are always evolving, and if a user has been absent for months or years, it's very likely that they no longer have a solid understanding of the community's current norms. Perhaps users that previously had a high level of trust should be able to build it back up more quickly, but they shouldn't indefinitely retain it when they stop being involved.
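Decay could be as simple as an exponential falloff after a grace period. The sketch below is purely illustrative: the decayed_trust helper, the grace period, and the half-life are assumptions, not planned values.

```python
# Hypothetical sketch of trust decay during inactivity. The grace period
# and half-life below are invented numbers purely for illustration.
from datetime import datetime, timedelta

GRACE_PERIOD = timedelta(days=30)   # no decay for short absences
HALF_LIFE = timedelta(days=90)      # trust halves for every 90 days beyond that


def decayed_trust(trust, last_active, now=None):
    """Return `trust` reduced according to how long the user has been inactive."""
    now = now or datetime.utcnow()
    inactive = now - last_active
    if inactive <= GRACE_PERIOD:
        return trust
    excess_days = (inactive - GRACE_PERIOD).days
    return trust * 0.5 ** (excess_days / HALF_LIFE.days)


# A user away for about seven months keeps only a fraction of their old trust.
print(decayed_trust(0.8, datetime.utcnow() - timedelta(days=210)))  # 0.2
```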

Between these two factors, we should be able to ensure that communities end up being managed by members that actively contribute to them, not just by people that want to be moderators for the sake of it.

Increased punishment effectiveness

One of the core reasons that platforms have so many issues with abuse is that their punishments have little impact. Banned users are often able to immediately create a new account with capabilities identical to their previous one. Trying to remove persistent malicious users can be an endless game of whack-a-mole, where punishing abusers takes more effort than it takes them to circumvent the punishment.

By having users gradually build up trust in individual communities, "established" accounts can be far more capable than brand new ones, which adds some actual weight to punishments. If implemented well, this should cause little inconvenience for regular users, but make it far, far more difficult for malicious users to cause trouble.
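For instance, abilities could be gated behind accumulated trust so that a freshly created account starts with almost nothing. The allowed_abilities helper, the ability names, and the thresholds in this sketch are hypothetical and only meant to illustrate the idea.

```python
# Hypothetical sketch of gating abilities behind accumulated trust. The
# specific abilities and thresholds are invented for illustration.

ABILITY_THRESHOLDS = {
    "comment": 0.0,       # available immediately
    "submit_topic": 0.1,
    "report": 0.2,
    "edit_tags": 0.5,
    "remove_comments": 0.9,
}


def allowed_abilities(trust):
    """Return the set of abilities unlocked at a given trust level."""
    return {name for name, minimum in ABILITY_THRESHOLDS.items() if trust >= minimum}


# A banned user starting over gets almost nothing back right away.
print(allowed_abilities(0.0))   # {'comment'}
print(allowed_abilities(0.95))  # everything
```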

Concerns

To be clear, I recognize that this is a dangerous type of system to implement, with the distinct risk of creating "power users" that have far too much influence. However, all systems have similar risks—even if all users are equal, people can form groups or abuse multiple accounts to increase their influence. These types of issues are social and can only be solved with oversight, accountability, and a willingness to punish people that abuse the system, not technology alone.

Many aspects of this system will need careful observation and tweaking to ensure it works as desired. We don't want to end up incentivizing the wrong types of behavior by creating systems that, for example, give more influence to the most popular users instead of the highest quality ones. It won't be a simple process, but I believe a system like this will be able to make a huge difference in maintaining the quality of a community as it grows.

The text of this page is licensed under Creative Commons Attribution-ShareAlike 4.0.
You can propose changes to this page by editing the copy of it available in the wiki for the ~tildes.official group on Tildes itself.