Saturday, December 20, 2014

Video disclosure, privacy, and protocol

Yesterday I had the privilege of attending (a small part of) the hackathon at the Seattle Police Department.  For those of you who don't know why a police department would be holding a hackathon, I encourage you to read this article.  In short, the Seattle Police Department has collected a vast amount of video from in-car and other cameras.  With the current outcry for increased transparency in government, and especially in police departments, these videos have increasingly become the target of Freedom of Information Act (FOIA) requests.  One FOIA request asked the department to provide its complete video archive.  The SPD has deemed this a legitimate request but faces a daunting task.  State law protects certain individuals' privacy, and releasing the raw video would violate that law.  The SPD is therefore stuck with the task of manually auditing and redacting thousands of hours of video.  Unlike some other departments, Seattle has taken a very positive, community-friendly approach to the problem: they asked the public for help.  SPD held a hackathon to develop and demonstrate technology that automatically redacts video.

At the hackathon, I saw demonstrations of some truly impressive software.  Video analysis algorithms are surprisingly advanced, and their application to this problem is very promising.  However, I noticed a trend that really bothered me: the applications ran directly on the raw video and produced a redacted video as their output.  After talking things over with my colleague, I think we are missing a very important step.  I believe this should be broken into two separate problems.  The first is detecting the content that should be redacted.  This is the hard and interesting problem; people spent most of the hackathon talking about and working on it.  The second is taking that data (what to redact) and applying it to the raw video.  While this may not seem like a meaningful separation, it enables a lot of flexibility.

One of the main recurring themes at the hackathon was the fallibility of the algorithms (and even of human auditors).  A video contains a lot of information, and we are bound to misidentify a person or misclassify a frame.  The problem is that you can't just say "oops" when you violate someone's right to privacy.  This means the thresholds on the "should this be redacted?" algorithms must be very conservative.  As anyone with machine learning or classification experience will tell you, this will cause a lot of false positives.  The general approach, as I understand it, is to redact very conservatively to protect people's privacy, and then fall back to manual redaction and audit when something "interesting" is in the video.  This allows for full disclosure in a "more-redacted-than-required" form (keeping the police department from breaking the law) while letting the public request minimal redaction of the videos they are interested in.
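
To make that trade-off concrete, here is a minimal sketch of what "very conservative" means in code.  The per-region confidence score and the threshold value are my own assumptions, not anything shown at the hackathon; the point is simply that the cutoff sits far below where a conventional classifier would put it, so misses (privacy violations) become rare and the resulting false positives are cleaned up by human review.

    # Hypothetical detector output: each region gets a confidence that it
    # contains protected content (a face, a license plate, a minor, ...).
    REDACTION_THRESHOLD = 0.05  # deliberately low: when in doubt, redact

    def should_redact(region_confidence):
        # A conventional classifier might cut off at 0.5; here we accept
        # many false positives so that a miss is very unlikely.
        return region_confidence >= REDACTION_THRESHOLD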

I believe that the technology should not only help with the initial, conservative redaction, but also enable the public to easily request the minimally redacted videos.  I also believe that the technology should help the police auditors to easily determine what should or should not be redacted.  This is where breaking the problem into two steps is extremely powerful.  Let me describe my proposed system.

1) Raw video recordings (RAW) are run through the (very conservative) redaction algorithms.
2) A redaction data file (RDF) is created as the output (this is *not* the redacted video).
3) The RAW and RDF are used as inputs to a second program whose only job is to apply the redactions to the video.
4) A redacted video file (RED) is created as the output.
5) The RED and the RDF are made available to the public.
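
To make the pipeline concrete, here is a minimal sketch of steps 2 through 4.  The RDF schema, the file names, and the blur-based redaction are all assumptions of mine (nothing here is an SPD format); the point is that the detection step only writes data, and a second, much simpler program turns RAW plus RDF into RED.

    import json
    import cv2  # OpenCV, used here only to illustrate the "apply" step

    # Step 2: a hypothetical redaction data file -- a list of regions, each
    # with an id, a reason, a frame range, and a bounding box to obscure.
    example_rdf = {
        "video": "in_car_2014_12_19.mp4",
        "redactions": [
            {"id": "r-001", "reason": "face", "start_frame": 1200,
             "end_frame": 1950, "box": [420, 80, 110, 140]},  # x, y, w, h
        ],
    }

    # Steps 3 and 4: apply an RDF to the raw video, producing the RED.
    def apply_redactions(raw_path, rdf, red_path):
        cap = cv2.VideoCapture(raw_path)
        fps = cap.get(cv2.CAP_PROP_FPS)
        size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
                int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
        out = cv2.VideoWriter(red_path, cv2.VideoWriter_fourcc(*"mp4v"),
                              fps, size)
        frame_no = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            for r in rdf["redactions"]:
                if r["start_frame"] <= frame_no <= r["end_frame"]:
                    x, y, w, h = r["box"]
                    frame[y:y + h, x:x + w] = cv2.GaussianBlur(
                        frame[y:y + h, x:x + w], (51, 51), 0)
            out.write(frame)
            frame_no += 1
        cap.release()
        out.release()

    # Given a raw recording on disk (file names assumed for illustration):
    with open("in_car_2014_12_19.rdf.json", "w") as f:
        json.dump(example_rdf, f, indent=2)
    apply_redactions("in_car_2014_12_19.mp4", example_rdf,
                     "in_car_2014_12_19_redacted.mp4")

Because the redacted video is purely a function of RAW plus RDF, it can be regenerated at any time the RDF changes.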

The RDF can serve as a way to talk about the redactions.  The RDF should be made available so that any inquiries related to the video have a common reference for each redaction.  On the auditing side, the RDF can be combined with the RAW to let auditors see, in context, what has been redacted and to remove or apply a set of redactions.
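
In code, removing or applying a set of redactions becomes a small edit to the RDF rather than a re-analysis of the video.  Continuing the hypothetical format from the sketch above:

    def set_redaction_status(rdf, redaction_id, status):
        # An auditor's decision references a redaction by its id; the public
        # video is then simply regenerated from RAW plus the updated RDF.
        for r in rdf["redactions"]:
            if r["id"] == redaction_id:
                r["status"] = status  # e.g. "upheld" or "released"
        return rdf

    # A reviewer releases one redaction; only the entries still in force
    # are applied when the redacted video is re-rendered.
    reviewed = set_redaction_status(example_rdf, "r-001", "released")
    in_force = dict(reviewed,
                    redactions=[r for r in reviewed["redactions"]
                                if r.get("status") != "released"])
    # apply_redactions("in_car_2014_12_19.mp4", in_force,
    #                  "in_car_2014_12_19_redacted_v2.mp4")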

Imagine visiting the SPD video archives website and finding a video you're interested in.  Viewing that video in the browser, you see that the officer's face has been redacted during a 5-minute traffic stop (in a 4-hour video).  In Washington state, the officer's identity is not protected by privacy laws.  You click the blur (the redaction) to bring up a context menu and choose "request redaction removal."  This sends an automated request for review of just that redaction to SPD.  Since an exact description of the redaction is built into the system, it is easy for the reviewer to review and approve.  A few minutes later, you get an automated email saying that your request has been reviewed and accepted, with a link to the video.
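
Because the redaction has an id in the RDF, the review request itself can be tiny.  Here is a sketch of what the browser might send; the endpoint and field names below are hypothetical, not an existing SPD API:

    import json
    import urllib.request

    # The only thing the requester has to identify is the redaction they
    # clicked on; the video, timestamps, and region all follow from its id.
    body = json.dumps({
        "video": "in_car_2014_12_19.mp4",
        "redaction_id": "r-001",
        "action": "request_removal",
        "reason": "Officer identity is not protected in Washington state.",
    }).encode("utf-8")

    req = urllib.request.Request(
        "https://spd-video-archive.example/api/review-requests",
        data=body, headers={"Content-Type": "application/json"})
    # urllib.request.urlopen(req)  # queued for an SPD reviewer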

Compare this scenario to what it looks like without the redaction data file.
From the SPD video archives site, you find a redaction you want removed.  Since the redaction data is not made available, you cannot click on a specific redaction in the video.  Instead, you report the entire video for review.  When reporting, you must be sure to specify where in the video you want the redaction removed ("1:35:12 through 1:40:33") and what you want removed ("I think the officer's face is redacted").  This sends an automated request for review of the video clip to SPD.  Without a machine-readable reference, the reviewer must read the free-text description in the request and find the redaction it refers to before reviewing and approving it.  A few hours later, you get an automated email saying that your request has been reviewed and accepted, with a link to the video.

It's obvious that not having a redaction data file increases the burden on the requestor (they must describe when the redaction appears and what should be removed), but it can be much worse.  If the clip contains multiple redactions, it may not be clear which one the request refers to.  The requestor may omit information required to review the request.  And most importantly, responding to the request takes much more time and effort from an SPD reviewer.

As good as it is for the review process, the RDF will have the most impact on the algorithms.  The RDF provides a structured record of what should, and should not, be redacted.  Being able to signal to the algorithm which of its decisions were "good" and which were "bad" greatly expands its capability.  In machine learning lingo, this feedback turns redaction into a supervised learning problem.  Every single video produced can teach the algorithm what should and should not be redacted.  Without a way to reference a redaction, this kind of feedback becomes incredibly difficult.
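
As a sketch of how that feedback could be used (the features and the classifier below are placeholders for whatever detector is actually in use), each reviewed RDF entry becomes a labeled training example: a redaction the auditor upheld is a positive, and one released on request is a negative.

    from sklearn.linear_model import LogisticRegression

    # Two reviewed entries from a hypothetical RDF after auditing.
    reviewed = [
        {"box": [420, 80, 110, 140], "frames": 750, "status": "upheld"},
        {"box": [30, 200, 60, 60], "frames": 40, "status": "released"},
    ]

    def region_features(r):
        # Placeholder features; a real detector would learn from the image
        # content of the region (pixels, embeddings, detector scores, ...).
        x, y, w, h = r["box"]
        return [w * h, r["frames"]]

    X = [region_features(r) for r in reviewed]
    y = [1 if r["status"] == "upheld" else 0 for r in reviewed]
    model = LogisticRegression().fit(X, y)  # retrain on reviewer feedback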

In my next post I will talk about my proposed data format and give an example of what I envision for a review UI.