Censorship Monitoring Project User Stories
An unordered list of user stories for each actor.
User stories should be in Mike Cohn's recommended format:
As (an actor) I want (feature) so that (benefit to that actor)
These stories should be expanded into Gherkin scripts for development as we decide to implement them. A start on this has been made using the Relish collaborative tool, and feedback on this approach is requested.
Where more than one actor could participate in a story, please list it under the actor who will benefit most from the feature, use it most often, or be most likely to use it.
Web Site
Human Website Users
As a user I want to...
- 1.0. use blocked.org.uk anonymously so that my privacy is preserved.
- 1.1. report a blocked site so that I can feel like I've done something useful about the problem.
- 1.2. report why a network says it has blocked a site so that I can help other users to judge their actions.
- 1.3. indicate whether I agree with the network's reasoning for blocking the site so that others can see my opinion.
- 1.4. view the opinion of other users about site blocks so that I can compare my beliefs to those of the user-base.
- 1.5. complain about a blocked site so that I can get the block lifted.
- 1.6. find out whether a site has been reported as blocked so that I can [take appropriate action] (see the lookup sketch after this list).
- 1.7. find out why a network has blocked a site so that I can judge their action according to my own beliefs.
- 1.8. receive alerts when the blocked status of a site changes so that I can [take appropriate action].
- 1.9. view a history of blocking reports so that I can determine the rate of applied censorship.
- 1.10. browse a list of blocked sites so that I can marvel at the state of censorship in the UK.
- 1.11. filter the list of blocked sites by network so that I can see what is censored on the networks I use or find a network whose censorship policies match my own beliefs.
- 1.12. download raw probe data so that I can analyse it differently.
- 1.13. report that a site is no longer blocked so that the accuracy of site data can be improved.
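Stories 1.6-1.8 imply some way of looking up a site's recorded status. The following is a minimal sketch only, assuming a hypothetical JSON endpoint and response fields that are not part of any agreed design:

```python
import requests

# Hypothetical endpoint and response shape, for illustration only.
API = "https://blocked.example/api/status"

def blocked_status(url):
    """Look up whether a URL has been reported or confirmed as blocked."""
    r = requests.get(API, params={"url": url}, timeout=30)
    r.raise_for_status()
    data = r.json()
    # e.g. {"url": ..., "status": "blocked", "networks": ["ISP-A", "ISP-B"]}
    return data["status"], data.get("networks", [])
```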
Bots, spiders and screen-scrapers
As a search-engine spider I want to...
- crawl the site so that I can index it and make the contents available in a search engine.
Administrators
As a system administrator I want to...
- add, edit and delete user records held within the system so that I can manage access.
- authorise a probe to access the API so that it can receive URLs to check and report results.
- revoke a probe's authority to access the API so that I can protect the system from malicious or faulty probes.
- maintain a "canary list" of sites that should never be censored so that probe configuration problems (or extreme levels of censorship) can be detected.
- maintain a second "canary list" of sites that should always be censored so that unfiltered connections can be detected or verified.
- maintain a blacklist of URLs that should never be sent to probes for automatic testing so that I can prevent abuse.
- set a minimum period between checks of the same URL so that redundant data are not collected (see the sketch after this list).
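The last story above is essentially a scheduling rule. A minimal sketch, assuming the interval is a single system-wide setting (the value and names here are illustrative, not decided):

```python
from datetime import datetime, timedelta, timezone

MIN_RECHECK_INTERVAL = timedelta(hours=24)  # assumed administrator setting

def is_due_for_check(last_checked):
    """Return True if a URL may be sent to probes again.

    last_checked is a timezone-aware datetime, or None if the URL
    has never been tested.
    """
    if last_checked is None:
        return True
    age = datetime.now(timezone.utc) - last_checked
    return age >= MIN_RECHECK_INTERVAL
```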
Probes
The OONI project has a set of use-cases that probably intersects with these.
As a probe I want to...
- ask for URLs to test so that I can check them for censorship (see the request/report sketch after this list).
- detect information about the networks to which I am connected so that I can include this in my reports.
- report whether a URL is censored on the network(s) to which I am connected so that the quality and quantity of available evidence about censorship is improved.
- detect the method being used to censor a URL I am checking so that I can include this information in my reports.
- detect the filtering settings that are applied to the networks to which I am connected so that I can include this information in my reports.
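Taken together, the first and third stories describe a simple request/test/report cycle between a probe and the middleware. A sketch of one such cycle, assuming a hypothetical authenticated JSON API (none of the endpoints or field names below are settled):

```python
import requests

API = "https://middleware.example/api"   # hypothetical middleware base URL
TOKEN = "probe-authorisation-token"      # issued by an administrator

def classify(url):
    """Crudely classify the result of fetching a URL."""
    try:
        r = requests.get(url, timeout=10)
    except requests.RequestException:
        return "error"                   # DNS failure, reset, timeout, ...
    if r.status_code in (403, 451):      # 451 = Unavailable For Legal Reasons
        return "blocked"
    return "ok"

def run_cycle():
    headers = {"Authorization": f"Bearer {TOKEN}"}
    # 1. Ask the middleware for a batch of URLs to test.
    batch = requests.get(f"{API}/urls", headers=headers, timeout=30).json()
    # 2. Test each URL and report the result so the evidence base grows.
    for url in batch["urls"]:
        requests.post(f"{API}/results", headers=headers, timeout=30,
                      json={"url": url, "status": classify(url)})
```

A real probe would need finer-grained result classes (DNS tampering, TCP reset, block page detection) to satisfy the "detect the method" story, but the request/report shape would be the same.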
As a probe user I want to...
- set when a probe can test a network so that I can limit operation to hours that are convenient (see the configuration sketch after this list).
- select the type or degree of censorship I believe applies to the probe's network connection so that the evaluation of censorship results can be improved.
- view statistics about the URLs my probe has tested so that I can check the system is working properly.
- view which URLs my probe has tested and what the results were so that I can better understand how my connection is censored.
- specify which categories of site I want the probe to check so that I can avoid appearing to have accessed controversial sites from my connection.
- manually verify a censorship result so that I can improve the quality of data submitted to the project database.
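The first and fifth stories above amount to local probe configuration. A sketch of how those two settings might gate testing, with purely illustrative values and names:

```python
from datetime import datetime

# Hypothetical probe-user settings: permitted testing hours and
# site categories this probe should never be asked to fetch.
ACTIVE_HOURS = range(1, 6)                    # 01:00-05:59 local time
EXCLUDED_CATEGORIES = {"adult", "violence"}

def may_test(category, now=None):
    """Return True if the probe is allowed to test a URL right now."""
    now = now or datetime.now()
    return (now.hour in ACTIVE_HOURS
            and category not in EXCLUDED_CATEGORIES)
```

Category filtering implies the middleware must tag the URLs it pushes, so the categories would need to travel with each URL in the probes API.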
Middleware
As the submissions API I want to...
- receive URLs to check for censorship so that they can be sent to probes.
- receive requests for information about the censorship of a URL so that I can return relevant data.
As the probes API I want to...
- push URLs to probes so that they can check them for censorship.
- receive censorship results from probes so that I can update the database.
- transform results received from different types of probes into a common format so that disparate information can be compared (see the sketch after this list).
- add results received from probes to the database so that the body of censorship evidence is improved.
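The transformation story matters because probes built on different software (our own, OONI, others) will report in different shapes. A sketch of normalising two hypothetical report formats into one record; every field name here is an assumption to be replaced once the real schemas are known:

```python
def normalise(raw, probe_type):
    """Map a raw probe report onto the common result record."""
    if probe_type == "native":
        # Assumed shape for our own probes.
        return {"url": raw["url"], "status": raw["status"],
                "network": raw["network"]}
    if probe_type == "ooni":
        # Placeholder mapping: OONI's actual report schema differs
        # and would be mapped here once integration is designed.
        return {"url": raw["input"],
                "status": "blocked" if raw.get("blocking") else "ok",
                "network": raw.get("probe_asn")}
    raise ValueError(f"unknown probe type: {probe_type}")
```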
Malicious actors
As a spammer I want to...
- submit links so that people follow them.
- submit links so that traffic to those sites seems to increase.
As a virus distributor I want to...
- submit links to viruses so that viruses are circulated to probes and installed.
- submit links to viruses so that the links are published by blocked.org.uk, clicked on by visitors, and the viruses installed.
As a cracker I want to...
- submit links containing SQL injection strings so that I can attack websites (the standard defence is sketched after this list).
- query the database other than through the provided interface so that I can bypass access or rate-limit controls.
- have a probe execute a test directly so that I can bypass the middleware scheduling system.
- masquerade as a privileged user (such as an ISP user, a probe-user or an administrator) so that I can gain unauthorised access to privileged features.
- request a large volume of checks for a particular URL so that I can overload its webserver and cause its users to be denied service.
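The SQL-injection story is worth pairing with its standard defence: submitted URLs should only ever reach the database as bound parameters, never by string interpolation. A minimal sketch using Python's sqlite3 placeholders (the schema is invented for illustration):

```python
import sqlite3

conn = sqlite3.connect("blocked.db")   # invented schema, illustration only
conn.execute("CREATE TABLE IF NOT EXISTS submissions (url TEXT)")

def record_submission(url):
    # The "?" placeholder means a hostile URL such as
    # "'; DROP TABLE submissions; --" is stored as inert text.
    conn.execute("INSERT INTO submissions (url) VALUES (?)", (url,))
    conn.commit()
```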
As an opponent of the project I want to...
- direct probes to access illegal content so that I can expose the project to legal liability.
- overload the system so that I can cause service to be denied to other users.