# Architecture
securedrop-client is a Qubes-based desktop application for journalists using SecureDrop. It runs on the SecureDrop Workstation in a non-networked VM and launches other VMs as needed in order to securely communicate with sources and handle submissions.

It uses the securedrop-sdk, which is an API client for the SecureDrop Journalist API. When used in Qubes, the SDK uses the securedrop-proxy, which runs in a VM with network access.
`sd-app AppVM <- RPC (via qrexec) -> securedrop-proxy in sd-proxy AppVM`
- `app.py`: configures and starts the SecureDrop Client application
- `db.py`: contains our database models
- `main.py`: contains the `QMainWindow` class, which sets up the main application layout and connects other application widgets to the Controller
- `widgets.py`: contains all application widgets except for `QMainWindow`
- `logic.py`: contains the `Controller` class, which syncs the client with the server, creates and enqueues jobs on behalf of the user, and sends Qt signals to the GUI
- `storage.py`: contains functions that perform operations on the local database and filesystem
- `sync.py`: contains the `ApiSync` class, which continuously runs a background task to sync with the server, and the `ApiSyncBackgroundTask` class, which contains the background task to sync with the server
- `queue.py`: contains the `RunnableQueue` class, a wrapper around Python's priority queue, and the `ApiJobQueue` class, a queue manager that manages two `RunnableQueue`s on separate threads: one for downloading file submissions and one for everything else
- `base.py`: contains job interfaces that provide exception handling and a way to signal back to the controller whether or not a job was successful
The `ApiSync` class manages a thread to continuously sync with the server. It schedules a sync only when a sync completes and begins the first sync when the `start` method is called with an auth token.

If a sync is successful or fails because of a `RequestTimeoutError`, then another sync is scheduled to run 15 seconds later. If a sync fails because of any other error, no new sync is scheduled and the `start` method must be called again with an auth token.

There is also a `stop` method that is used to stop continuous syncs, for instance when switching to offline mode.
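A minimal sketch of this scheduling behavior, using a plain `threading.Timer` in place of the client's Qt machinery; the `ContinuousSync` class and its helper names are illustrative, not the actual `ApiSync` implementation, and only the 15-second interval and the error handling rules come from the description above:

```python
import threading


class RequestTimeoutError(Exception):
    """Stand-in for the API layer's timeout error."""


class ContinuousSync:
    """Illustrative equivalent of ApiSync's scheduling rules (not the real class)."""

    SYNC_INTERVAL_SECONDS = 15

    def __init__(self, sync_fn):
        self.sync_fn = sync_fn  # the background task, e.g. a metadata sync
        self.api_token = None
        self._timer = None

    def start(self, api_token):
        # Begin the first sync; later syncs are scheduled only after one completes.
        self.api_token = api_token
        self._run_once()

    def stop(self):
        # Stop continuous syncing, e.g. when switching to offline mode.
        # (A fuller version would also prevent an in-flight sync from rescheduling.)
        if self._timer is not None:
            self._timer.cancel()
            self._timer = None

    def _run_once(self):
        try:
            self.sync_fn(self.api_token)
        except RequestTimeoutError:
            # A timeout is retried: schedule another sync in 15 seconds.
            pass
        except Exception:
            # Any other failure stops continuous syncing; start() must be
            # called again with a fresh auth token.
            return
        self._schedule_next()

    def _schedule_next(self):
        self._timer = threading.Timer(self.SYNC_INTERVAL_SECONDS, self._run_once)
        self._timer.daemon = True
        self._timer.start()
```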
The `ApiSyncBackgroundTask` class only contains a `MetadataSyncJob` and a method called `sync` to run this job whenever it's called. This class connects the `MetadataSyncJob`'s success and failure signals to callbacks provided by `ApiSync` during initialization.
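Roughly, the wiring looks like the sketch below, where plain callables stand in for the Qt success/failure signals; the class and method bodies are hypothetical stand-ins, not the real job code:

```python
class MetadataSyncJobStub:
    """Stand-in for MetadataSyncJob; the real job fetches remote metadata via the API."""

    def __init__(self, on_success, on_failure):
        self.on_success = on_success
        self.on_failure = on_failure

    def run(self, api_client):
        try:
            # ... call the server and update local storage ...
            self.on_success()
        except Exception as exc:
            self.on_failure(exc)


class BackgroundTask:
    """Illustrative equivalent of ApiSyncBackgroundTask: owns one job, runs it on demand."""

    def __init__(self, on_success, on_failure):
        # The success/failure callbacks are provided by the sync manager at init time.
        self.job = MetadataSyncJobStub(on_success, on_failure)

    def sync(self, api_client):
        self.job.run(api_client)
```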
The `ApiJobQueue` class manages two threads running separate queues: one that enqueues `FileDownloadJob`s and one that enqueues other user-initiated jobs (`SendReplyJob`, `DeleteSourceJob`, and `UpdateStarJob`) as well as sync-initiated jobs (`MessageDownloadJob` and `ReplyDownloadJob`).

`ApiJobQueue` starts the queues when its `start` method is called with an auth token to ensure jobs are able to make their requests. It stops the queues when the `stop` method is called. If the queues had to pause, that is, if they returned from their processing loop because of too many `RequestTimeoutError`s, then processing is restarted when its `resume_queues` method is called.
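A simplified sketch of how jobs are routed between the two queues; the `TwoQueueManager` class, its attribute names, and the use of plain lists are illustrative stand-ins for the real thread-backed `RunnableQueue`s:

```python
from typing import List

# Hypothetical job marker classes; the real jobs live under securedrop_client/api_jobs/.
class FileDownloadJob: ...
class SendReplyJob: ...
class DeleteSourceJob: ...
class UpdateStarJob: ...
class MessageDownloadJob: ...
class ReplyDownloadJob: ...


class TwoQueueManager:
    """Illustrative routing only; the real ApiJobQueue wraps two RunnableQueues on threads."""

    def __init__(self) -> None:
        self.main_queue: List[object] = []           # user- and sync-initiated jobs
        self.download_file_queue: List[object] = []  # file downloads only
        self.api_token = None
        self.running = False

    def start(self, api_token: str) -> None:
        # The token is handed to the queues so jobs can authenticate their requests.
        self.api_token = api_token
        self.running = True

    def stop(self) -> None:
        self.running = False

    def resume_queues(self) -> None:
        # In the real client this restarts queues that paused after repeated timeouts.
        self.running = True

    def enqueue(self, job: object) -> None:
        if isinstance(job, FileDownloadJob):
            self.download_file_queue.append(job)
        else:
            self.main_queue.append(job)


manager = TwoQueueManager()
manager.start(api_token="example-token")
manager.enqueue(FileDownloadJob())
manager.enqueue(SendReplyJob())
assert len(manager.download_file_queue) == 1 and len(manager.main_queue) == 1
```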
Each queue processes one job at a time. They are implemented via the `RunnableQueue` class. Each `RunnableQueue` contains a `queue` attribute, which is simply Python's priority queue implementation (`queue.PriorityQueue`). This is used to prioritize more important jobs over others. One of the quirks of Python's priority queue is that it does not preserve FIFO ordering of objects with equal priorities, so a counter was added to our job objects to ensure that the sort order of objects with equal priorities is stable.
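The counter trick can be demonstrated with `queue.PriorityQueue` directly; the priority numbers below are made up for illustration and do not come from the client code:

```python
import itertools
from queue import PriorityQueue

q = PriorityQueue()
counter = itertools.count()  # monotonically increasing tie-breaker


def add_job(priority, name):
    # Entries sort by priority first, then by insertion order via the counter,
    # so jobs of equal priority come out in FIFO order.
    q.put((priority, next(counter), name))


add_job(13, "MessageDownloadJob A")
add_job(13, "MessageDownloadJob B")
add_job(11, "PauseQueueJob")

while not q.empty():
    priority, order, name = q.get()
    print(priority, order, name)
# Prints the PauseQueueJob first (lower number = higher priority),
# then the two download jobs in the order they were added.
```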
We have several jobs, in order of priority from highest to lowest:

- `PauseQueueJob` - pauses the queue when network timeouts occur
- `FileDownloadJob` - downloads files, processed in a separate queue where only `PauseQueueJob` can also be added
- `DeleteSourceJob` - deletes a source
- `SendReplyJob` - sends a reply to a source
- `UpdateStarJob` - updates a source star (to starred or unstarred), which is used to indicate interest in a source
- `MessageDownloadJob`, `ReplyDownloadJob` - download messages and replies, which have the same and lowest priority since these are not user-initiated jobs
`RunnableQueue` maintains a priority queue and processes jobs in that queue. It continuously processes the next job in the queue, which is ordered by highest priority. Priority is based on job type. If multiple jobs of the same type are added to the queue, then they are retrieved in FIFO order.

If a `RequestTimeoutError` or `ServerConnectionError` is encountered while processing a job, the job will be added back to the queue, the processing loop will stop, and the paused signal will be emitted. New jobs can still be added, but the processing function will need to be called again in order to resume. The processing loop is resumed when the resume signal is emitted.
If an `ApiInaccessibleError` is encountered while processing a job, the auth token will be set to `None` and the processing loop will stop. If the queue is resumed before the queue manager stops the queue thread, the auth token will still be `None` and the next job will raise an `ApiInaccessibleError` before it makes an API call, which will repeat this process.
Any other exception encountered while processing a job is unexpected, so the queue will drop the job and continue on to the next job. The job itself is responsible for emitting the success and failure signals, so when an unexpected error occurs, it should emit the failure signal so that the Controller can respond accordingly.
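Putting the three cases together, the processing loop behaves roughly like this sketch; the exception classes and the job interface are stand-ins for the real API errors and jobs, and a plain callback replaces the paused Qt signal:

```python
from queue import PriorityQueue


# Stand-ins for the errors raised by the API layer.
class RequestTimeoutError(Exception): ...
class ServerConnectionError(Exception): ...
class ApiInaccessibleError(Exception): ...


class SimpleRunnableQueue:
    """Illustrative loop only; the real RunnableQueue runs this on its own thread."""

    def __init__(self, on_paused):
        self.queue = PriorityQueue()
        self.api_token = "token"    # cleared when the API becomes inaccessible
        self.on_paused = on_paused  # stands in for the 'paused' Qt signal

    def process(self):
        while True:
            # Blocks until a job is available; entries are (priority, counter, job).
            priority, counter, job = self.queue.get(block=True)
            try:
                job.run(self.api_token)
            except (RequestTimeoutError, ServerConnectionError):
                # Re-queue the job, stop processing, and signal that we paused.
                self.queue.put((priority, counter, job))
                self.on_paused()
                return
            except ApiInaccessibleError:
                # Clear the token and stop; jobs run before re-auth will fail again.
                self.api_token = None
                return
            except Exception:
                # Unexpected error: drop the job and move on. The job itself is
                # expected to have emitted its failure signal.
                continue
```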
Replies (from multiple journalists), messages, and files have a consistent ordering, set by the order in which items arrive on the SecureDrop server, and clients synchronize to this ordering.
Pending or failed replies are stored persistently in the client database and are considered drafts. Draft replies contain:

- a `file_counter`, which points to the `file_counter` of the previously sent item. This enables us to interleave the drafts with the items from the source conversation fetched from the server, which do not have timestamps associated with them.
- a `timestamp`, which records when the draft reply was saved locally: this is used to order drafts in the case where there are multiple drafts sent after a given reply (i.e. when `file_counter` is the same for multiple drafts).
On the client side, the order of a conversation item within the conversation view (i.e. the widget index) reflects the order of the item within the source's conversation collection, which is ordered first by `file_counter` (for draft replies, messages, files, and replies), and then by `timestamp` (used for draft replies only).
To recap, the relevant attributes for ordering are:

- for `Reply` objects (successful replies): `file_counter`
- for `File` objects: `file_counter`
- for `Message` objects: `file_counter`
- for `DraftReply` objects (pending or failed replies): `file_counter`, and then `timestamp`. For a draft, its initial `file_counter` is set to the current value of `source.interaction_count`.
The maximum `file_counter` for all items associated with a given source is equal to the `Source.interaction_count` for the source.
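As a sketch, the ordering rule amounts to sorting the conversation collection by a `(file_counter, timestamp)` key; the `Item` dataclass and the sample data below are simplified stand-ins for the database models, not the actual implementation:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional


@dataclass
class Item:
    kind: str                             # "message", "file", "reply", or "draft_reply"
    file_counter: int
    timestamp: Optional[datetime] = None  # only used for draft replies here


def sort_key(item: Item):
    # Sort by file_counter first; items without a timestamp (server items) sort
    # ahead of drafts sharing the same file_counter, and drafts among themselves
    # sort by when they were saved locally.
    return (item.file_counter, item.timestamp or datetime.min)


conversation = [
    Item("draft_reply", 2, datetime(2024, 1, 1, 10, 0)),
    Item("message", 1),
    Item("reply", 2),
    Item("draft_reply", 2, datetime(2024, 1, 1, 9, 0)),
]
for item in sorted(conversation, key=sort_key):
    print(item.kind, item.file_counter, item.timestamp)
# message 1, reply 2, then the two drafts ordered by their local timestamps
```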
Example of reply sending:

- `Reply` Q has `file_counter=2`
- User adds `DraftReply` R; it has `file_counter=2`
- User adds `DraftReply` S; it has `file_counter=2` and `timestamp(S) > timestamp(R)`.
- `DraftReply` R is saved on the server with `file_counter=4` (this can happen as other journalists can be sending replies), and it is converted to `Reply` R locally.
- We now update `file_counter` on `DraftReply` S so that it appears after `Reply` R in the conversation view (see the sketch below).
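To make this concrete, here is the same `(file_counter, timestamp)` key applied to Q, R, and S before and after R is accepted by the server; the timestamps are invented for illustration:

```python
from datetime import datetime

# (name, file_counter, timestamp) - timestamps only matter for drafts.
items = [
    ("Reply Q", 2, None),
    ("DraftReply R", 2, datetime(2024, 1, 1, 9, 0)),
    ("DraftReply S", 2, datetime(2024, 1, 1, 10, 0)),
]

key = lambda it: (it[1], it[2] or datetime.min)
print([name for name, *_ in sorted(items, key=key)])
# ['Reply Q', 'DraftReply R', 'DraftReply S']

# After R is accepted by the server it becomes Reply R with file_counter=4,
# and S's file_counter is bumped so it still sorts after R:
items = [
    ("Reply Q", 2, None),
    ("Reply R", 4, None),
    ("DraftReply S", 4, datetime(2024, 1, 1, 10, 0)),
]
print([name for name, *_ in sorted(items, key=key)])
# ['Reply Q', 'Reply R', 'DraftReply S']
```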