So recently I’ve been playing around with the NESDR Smart USB dongle and using it to explore the airwaves around my apartment. This was a project born of one part curiosity about the fun waterfall graphs I saw at DEFCON and one part concern for a few friends who work EMS in the city. While trying to set everything up I discovered that to understand most of that traffic you have to actually decode it from the digital voice format it is sent in, and that’s where my problems started.

I happen to run Arch Linux on my desktop, which made the best Windows tutorials I found pretty much useless, and the Linux-based ones focused on either Kali or Ubuntu. Thankfully a lot of the instructions carry over, so I started by installing gqrx and dsd from the AUR. I also made sure I had access to pavucontrol and padsp, which are called out in multiple places as a way to work around dsd’s reliance on the OSS sound system on Linux. In theory dsd can use PortAudio for cross-platform audio support, but the package I was able to build didn’t even provide a -a flag to list available audio output devices. Since my system only runs PulseAudio, I needed those tools to emulate OSS and hand the audio stream over to PulseAudio for playback.
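For anyone following along on Arch, the setup boiled down to a few commands. This is a sketch, not gospel: I’m using yay as a stand-in for whatever AUR helper you prefer, and package names/locations may have shifted since I did this.

```shell
# gqrx and pavucontrol come from the official repos; padsp ships with
# the pulseaudio package itself. dsd has to be built from the AUR.
sudo pacman -S gqrx pavucontrol
yay -S dsd   # yay is just an example AUR helper; substitute your own
```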

Except that it couldn’t. For whatever reason, padsp would not or could not create the /dev/audio nodes that dsd expected. I hunted through the logs for hours but couldn’t find anything on the error it was throwing. Time for a workaround!

Since I couldn’t route my audio directly with PulseAudio, it was time to get creative with the functionality I did have. One of the more interesting features of gqrx is its ability to stream raw audio over the network via UDP. So I found a likely frequency and set gqrx to stream the audio over UDP on port 7355. From my reading and testing, Narrow FM mode with a normal filter width and AGC turned off has given the best (if still limited) results. I also knew that while dsd will complain about the missing device nodes, it will happily start up with arbitrary input and output file names. With those pieces I was able to put together the following script.

#!/bin/bash

WORK_DIR=$(mktemp -p "${TMPDIR:-.}" -d sdr-decode-XXXX) || exit 1
NC_OUTPUT_PIPE="$WORK_DIR/nc_output"
DSD_OUTPUT_PIPE="$WORK_DIR/dsd_output"

GQRX_UDP_PORT=7355

function handle_exit() {
  rm -r "$WORK_DIR"
}

trap handle_exit INT

mkfifo "$NC_OUTPUT_PIPE"
mkfifo "$DSD_OUTPUT_PIPE"

# Pull the raw audio gqrx is streaming over UDP into the first FIFO
nc -l -u localhost "$GQRX_UDP_PORT" > "$NC_OUTPUT_PIPE" &
# Decode with auto-detected frame type (-fa) and modulation (-ma)
dsd -i "$NC_OUTPUT_PIPE" -w "$DSD_OUTPUT_PIPE" -fa -ma &
# cat keeps aplay's stdin open between transmissions
cat "$DSD_OUTPUT_PIPE" | aplay

We first create two FIFOs for nc and dsd to write to, then set up nc to pull the data off of the network. This assumes gqrx and nc are running on the same machine; if they aren’t, adjust the host accordingly. The decoder is set up to automatically handle as many different kinds of traffic as possible, since I’m still learning about the various digital voice modes. If you have a specific mode in mind, the dsd wiki has more information on which flags to use.
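Before wiring dsd in at all, it’s worth sanity-checking that audio is actually arriving on the UDP port. gqrx streams its demodulated audio as raw 48 kHz signed 16-bit little-endian mono, so you should be able to hear the raw (undecoded) FM audio with nothing but nc and aplay. The port here matches the script; adjust if you changed it.

```shell
# Listen to gqrx's raw UDP audio stream directly, no dsd in the loop.
# If this is silent, the problem is on the gqrx side, not the decoder.
nc -l -u localhost 7355 | aplay -t raw -f S16_LE -r 48000 -c 1
```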

The last line is also fairly important if you want the script to keep running so you can listen as you hop around frequencies. If you pipe the output of dsd directly into aplay, the script terminates as soon as whatever voice transmission you have tuned ends. With cat pulling data from the FIFO, the pipe into aplay stays open indefinitely. In fact, the only way to terminate the script is with Ctrl-C; once you break out of it, the script is configured to clean up after itself so you don’t junk up your system with leftover files.
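The mechanism underneath is plain FIFO semantics: a reader on a named pipe blocks until a writer opens the other end, and only sees EOF once every writer has closed it. Here is a tiny standalone sketch of that behavior (all the file names are throwaway examples):

```shell
# Minimal FIFO demo: the reader blocks until a writer opens the pipe,
# then hits EOF once that writer closes its end.
demo_dir=$(mktemp -d)
mkfifo "$demo_dir/pipe"
cat "$demo_dir/pipe" > "$demo_dir/out" &   # reader blocks, waiting on a writer
echo "transmission" > "$demo_dir/pipe"     # writer opens, writes, closes -> EOF
wait                                       # reader exits once it sees EOF
result=$(cat "$demo_dir/out")
echo "$result"                             # prints "transmission"
rm -r "$demo_dir"
```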

I’m still working on getting clear audio out of the connection with my limited equipment and experience, but it does work! I’m finding that when dsd reports an inlvl of 20% or more, you can hear what sounds like voices, but nothing you can make out as words.