Finally: Some Shareable Code

(On its way…)

I’ve had code to control the ADAU1701 going back to 2013, but it was never anything I wanted to share. The code started out as assembly code for a 68HC08 microprocessor to control the TAS3004 DSP chip, and it was later updated to control the DSP portion of the STA328 and STA309 amplifier chips. When the ADAU1701 came along, I updated this code to communicate with the ADAU1701, and then starting in 2016 I finally converted this code to C for the Arduino development tools.

That code was still architected in a way that made sense for assembly code, but it was not good C code. It was modular in a way that supported different types of DSP chips, but it made extensive use of global data structures so I could keep track of the small amount of RAM available in those microprocessors, and it relied heavily on look-up tables generated by Excel to keep arithmetic calculations to a minimum. I could maintain it, but I knew that nobody else could, so I didn’t want to share it. The embedded computing world has changed dramatically in the last 5 years, and each iteration of new hardware was adding new features. I was starting to get lost in all the unique revisions, and it was past time for a major refactoring of this code to make it more modular and more maintainable. Late last year I put hardware development on hold until I could make the software more manageable.

When I started re-writing this code, I wasn’t aware that there were some other efforts to develop an ADAU1701 library. Just recently someone pointed out the good work by the AidaDSP team and some enhancements by MCUDude. If I had known about these efforts I would have reused that code for the lower layers of the ADAU1701 library. Using those libraries wouldn’t have saved any time–it’s just nice to have some “standard” tools and conventions. Also, that other work only addresses the “lower” layers of the ADAU1701 software, as I’ll show later on.

The Layers

I find this layered view of the software a useful way to visualize this code:

At the lowest layers, the interactions are with the chips, using the standard Wire library. The next layer is what I call “I2C”, as all of the functions require communication with the DSP or other devices using the I2C bus. The functions defined for this layer include:


These functions overlap the “low level functions” in the AidaDSP library, with the exception of the “Load Program” and “Load Parameter” functions. I prefer using the microprocessor to load the code into the ADAU1701 rather than using the self-boot capability, so these additional functions are needed.
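To give a flavor of what a “Write Parameter” function in this layer does before handing bytes to Wire, here is a minimal sketch. The buffer layout is an assumption based on the ADAU1701 datasheet (a 28-bit parameter word sent as 4 big-endian bytes following the 2-byte register address), and `packParamWord` is a hypothetical helper name, not this library’s API:

```cpp
#include <cstdint>

// Hypothetical helper: pack one 28-bit Parameter RAM word into the 4-byte,
// big-endian payload the ADAU1701 expects after the 2-byte register address.
// The real I2C-layer code would then hand this buffer to Wire.write().
void packParamWord(uint32_t value, uint8_t buf[4]) {
    buf[0] = (value >> 24) & 0x0F;  // only the low 28 bits are significant
    buf[1] = (value >> 16) & 0xFF;
    buf[2] = (value >> 8)  & 0xFF;
    buf[3] = value & 0xFF;
}
```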

DSP Layer

The next layer up includes the “DSP” functions. This layer includes the filter, volume and crossover calculation functions, plus a long list of support functions. This layer is also where the filter data is converted to the 5.23 representation specific to the ADAU1701. The DSP functions in this layer are all audio-oriented, whereas the library from MCUDude focuses on signal-generator functions, so at this level the libraries start to diverge.
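As an illustration of the 5.23 conversion performed in this layer, here is a hedged sketch. The function name is hypothetical and the library’s actual rounding and clamping behavior may differ; the constants follow directly from the format (23 fractional bits, 28 significant bits, giving a range of [-16, 16)):

```cpp
#include <cstdint>
#include <cmath>

// Sketch of converting a floating-point coefficient to the ADAU1701's 5.23
// fixed-point format: scale by 2^23, clamp to the 28-bit two's-complement
// range, and mask to 28 bits.
int32_t to523(double x) {
    double scaled = std::round(x * 8388608.0);          // 2^23
    if (scaled >  134217727.0) scaled =  134217727.0;   // max 28-bit positive
    if (scaled < -134217728.0) scaled = -134217728.0;   // min 28-bit negative
    return (int32_t)scaled & 0x0FFFFFFF;                // 28-bit result
}
```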

Cell Map

Now for that mysterious orange bar (remember that old software pick-up line: “what’s a nice layered architecture doing with a bar like this?”). The orange bar labeled “Cell Map” solves the problem that MCUDude discusses in the section about the parameter generator script. Like several other newer DSP chips from Analog Devices and TI, the ADAU1701 doesn’t have a fixed address architecture with dedicated data registers for volume, filter, mux or generator cells. Instead, the SigmaStudio compiler assigns Parameter RAM addresses dynamically when the program is compiled. The only way to find out which address to write to for a volume change or for changing filter parameters is to look at the files generated by the compiler. You can do this manually, of course, by reading those text files, but every time you make a change to the SigmaStudio design, the compiler can assign a totally different address to the cells. For example, your volume control might use Parameter RAM address 0020 in one iteration of your SigmaStudio design, but after adding some other component, the volume control might get assigned 0021, or some other address.

So, we need some tools to create a Cell Map: something that can ingest the SigmaStudio compiler files and generate declarations or executable code for our Arduino program. The Cell Map file looks like:

const word Source_Sel = 0;
const word SW_vol_1 = 4;
const word EQ_50 = 6;
const word EQ_80 = 11;
const word EQ_300 = 16;

. . .

const word Rumble_L = 225;
const word Rumble_R = 230;
const word Tweeter_Pol_L = 235;
const word Tweeter_Pol_R = 240;

I generate the Cell Map file using a simple program–a working preliminary version is available for download at this link. MCUDude took a somewhat different approach to creating this mapping–he used a PowerShell or Bash script to parse the SigmaStudio compiler files and extract the Parameter RAM addresses. The end result is the same: a header or executable file with a mapping from the name of the cell in your code to its address on the I2C bus. I also process the file with the SigmaStudio code so the micro can load that data into the Program RAM. That way, there is no need for a self-boot EEPROM or a SigmaStudio programmer–in fact, I don’t even own one of those programmers.
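To show the kind of extraction either approach performs, here is a minimal sketch of pulling an address out of one line of a SigmaStudio export header. It assumes the header contains `#define` lines whose macro names end in `_ADDR` (the exact macro naming varies between designs and SigmaStudio versions), and `extractAddr` is a hypothetical helper, not part of either library:

```cpp
#include <string>
#include <sstream>

// Sketch of the address extraction a Cell Map generator performs. Assumes the
// SigmaStudio export header contains lines shaped roughly like:
//   #define MOD_SWVOL1_ALG0_TARGET_ADDR 4
// Returns the address and fills `name`, or returns -1 for non-matching lines.
long extractAddr(const std::string& line, std::string& name) {
    std::istringstream iss(line);
    std::string kw, macro;
    long addr;
    if (!(iss >> kw >> macro >> addr)) return -1;   // need all three tokens
    if (kw != "#define") return -1;
    const std::string tag = "_ADDR";
    if (macro.size() < tag.size() ||
        macro.compare(macro.size() - tag.size(), tag.size(), tag) != 0)
        return -1;                                  // not an address macro
    name = macro;
    return addr;
}
```

A generator would run this over every line of the export file and emit the `const word` declarations shown above.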

The cell map also has some information that is needed for controlling audio functions that are linked or that span multiple cells. For example, it is convenient to work with left and right audio channels in the same function, and features such as multiband equalizers and high order crossovers require multiple biquad cells to implement. These groupings are also defined in the same Cell Map file:

word Rumble_filter_addresses[2] = {Rumble_L, Rumble_R};
word Peak_filter_addresses[2] = {Peaking_L, Peaking_R};

      .  .  .

word EQ_address[9] = {EQ_50, EQ_80, EQ_300, EQ_600, EQ_900, EQ_2K, EQ_5K, EQ_8K, EQ_13K};
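A hedged sketch of how a linked-channel update might use these arrays (both function names are hypothetical, and the `writeBiquad` stub here just records the target address, standing in for the real I2C-layer write so the grouping logic can run without hardware):

```cpp
#include <cstdint>
#include <array>
#include <vector>

// Stand-in for the I2C-layer call that would push five 5.23 coefficients to
// Parameter RAM at `addr`; here it only logs the address for inspection.
static std::vector<uint16_t> writtenAddrs;
void writeBiquad(uint16_t addr, const std::array<int32_t, 5>& coeff) {
    (void)coeff;
    writtenAddrs.push_back(addr);
}

// Linked-channel update: apply the same biquad to every cell in a Cell Map
// group, e.g. Rumble_filter_addresses[2] = {Rumble_L, Rumble_R}.
void setGroupBiquad(const uint16_t* group, int count,
                    const std::array<int32_t, 5>& coeff) {
    for (int i = 0; i < count; i++)
        writeBiquad(group[i], coeff);
}
```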

Command Layer

The Command Layer provides a high-level wrapper for the DSP functions. Each of these Command Layer functions maps to a command that is exposed via the HCI. The header for the Command library looks like this:

void EQ_80_Gain(int reg_num, int value_code);
void EQ_300_Gain(int reg_num, int value_code);
void EQ_600_Gain(int reg_num, int value_code);
void EQ_900_Gain(int reg_num, int value_code);
void EQ_2K_Gain(int reg_num, int value_code);
void EQ_5K_Gain(int reg_num, int value_code);
void EQ_8K_Gain(int reg_num, int value_code);
void EQ_13K_Gain(int reg_num, int value_code);
void EQ_reset_Gain(int reg_num, int value_code);
void Rumble_filter_freq(int reg_num, int value_code);
void Rumble_filter_Q(int reg_num, int value_code);

This layer still requires “knowledge” of the Parameter RAM address (“reg_num”), and it requires a “value code”, which is a valid menu item. These menu items are described in the next section.
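As a sketch of what one of these wrappers might look like internally (the body shown here is an assumption, not the actual library code: `set_peak_gain` is a hypothetical DSP-layer call, recorded here so the wrapper can be exercised; the `EQ_80` address matches the Cell Map example above):

```cpp
#include <cstdint>

// Hypothetical DSP-layer call: compute peaking-EQ coefficients for the given
// gain code and write them to the biquad cell at `addr`. This stub just
// records its arguments.
static int lastAddr = -1, lastCode = -1;
void set_peak_gain(int addr, int value_code) {
    lastAddr = addr;
    lastCode = value_code;
}

// One Command Layer wrapper: it pins the Cell Map address so the client only
// has to supply a value code. EQ_80 matches the Cell Map file shown earlier.
const uint16_t EQ_80 = 11;
void EQ_80_Gain(int reg_num, int value_code) {
    (void)reg_num;  // in the real code, reg_num selects the cell of interest
    set_peak_gain(EQ_80, value_code);
}
```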

HCI Layer

This code has always included an HCI layer, even for the old assembly language versions. The HCI is implemented as a system of state machines, in which the current state of the DSP is stored in EEPROM, and the primary inputs are “Next” and “Previous” codes. There is a table that defines the allowable states for each menu category, as well as the corresponding response. The responses form the “output vocabulary” in the classic definition of a Moore machine. The HCI also allows jumping directly to a specific item in the menu.

Moore machine–from Wikipedia

An example might help make this HCI implementation clearer. Let’s look at the delay command, which is implemented the same way as all the other commands. This code supports 3-way designs, so there are 3 delay state machines, one for each stereo pair (tweeter, woofer or subwoofer). The delay command has 12 allowable states, so the value codes go from 0 to 11. Assume the current state for the active channel is 0, which corresponds to a delay of 0 inches. If the input is “next”, the state will transition to the next state, which is 1, and the output logic will look up the value in a table that gets sent to the ADAU1701 delay cell. It will also look up the string response that gets returned to the command interpreter, which in this case is 0.28 (the delay in inches corresponding to a delay of one clock cycle). The updated state is then stored in memory.
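The delay example above can be sketched as a small table-driven Moore machine. The names are hypothetical, and whether the real code saturates or wraps at the ends of the menu is an assumption here; the response strings simply step by the 0.28-inch value from the example (a parallel table would hold the per-state DSP values):

```cpp
#include <cstring>

// One HCI state machine (the delay control): 12 states, with the output
// table mapping each state to the response string returned to the client.
const int DELAY_STATES = 12;
const char* delayResponse[DELAY_STATES] = {
    "0.00", "0.28", "0.56", "0.84", "1.12", "1.40",
    "1.68", "1.96", "2.24", "2.52", "2.80", "3.08"
};

// "Next"/"Previous" transitions; shown here saturating at the ends, though a
// wrap-around design would work just as well.
int delayNext(int state) { return (state < DELAY_STATES - 1) ? state + 1 : state; }
int delayPrev(int state) { return (state > 0) ? state - 1 : state; }
```

Starting from state 0, a “next” input moves to state 1 and the output logic returns `delayResponse[1]`, matching the walk-through above.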

This HCI implementation is easy to maintain because all possible states and the responses are in tables. The client application doesn’t need to know anything about the DSP–it just needs to send codes that select the right state machine, along with “next” and “previous” commands. The microprocessor controlling the ADAU1701 keeps track of the state of each DSP function, and it tells the client application what to display. Adding new states is easy–just add more entries to the tables, and the client doesn’t need to be changed. The client doesn’t need to convert numbers or know anything about what is going on in the DSP: it just needs to send the right command strings and display the string that comes back.

The HCI layer also includes the command interpreter, which can accept commands from a number of different types of clients. The clients can be a cell phone app, web page, MQTT client or serial/USB port–I’ve got code for a number of different protocols. The Bluetooth interface that I have right now is “classic” rather than BLE, but BLE is on the to-do list.

The command “payload” is defined in another article on this site (see Article 12). The payload is simply a command code that specifies the state machine, an optional sub-code, and an action code that signifies either “next”, “previous”, or “go to value x”, where x is one of the allowable states. I’ll provide a summary of the commands in another article or as an update to this one.
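Purely as an illustration of that payload structure (the actual wire format is defined in Article 12 and may differ; this whitespace-separated text form and the field names are assumptions), a decoder might look like:

```cpp
#include <string>
#include <sstream>

// Illustrative payload decoder for a text form like "DLY 1 N": a command
// code selecting the state machine, a sub-code, and an action that is either
// "N" (next), "P" (previous), or a number meaning "go to value x".
struct Payload {
    std::string cmd;
    int sub;
    int action;  // -1 = next, -2 = previous, >= 0 = go to value x
};

bool parsePayload(const std::string& text, Payload& out) {
    std::istringstream iss(text);
    std::string act;
    if (!(iss >> out.cmd >> out.sub >> act)) return false;
    if (act == "N")      out.action = -1;
    else if (act == "P") out.action = -2;
    else                 out.action = std::stoi(act);  // sketch: no error check
    return true;
}
```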


As of April 2021, the lower level library functions are working, but the commands are still getting converted from the old code to the new. There is nothing “difficult” left to do, but there is still a lot of tedious refactoring. And there will still be a lot of “clean-up” before this will be posted. It feels a bit odd to spend so much time on an effort with no new functionality, but getting this code in better shape will allow sharing it, which is something I couldn’t do before.

Upcoming Enhancements

Right now I have different “flavors” of the code for different control devices. There is a Bluetooth classic version, an MQTT version, and the original serial/USB version, along with versions that use the Nextion LCD touchscreen display and rotary encoders with discrete LCD displays. All of them are similar, in that the I/O gets converted to a common set of commands/responses, but there is no easy way to switch between different control types. So I need to redesign how the code supports multiple control types and factor the code into libraries that can be included with the sketch.

Another enhancement is an HCI tool that allows designing the HCI tables with graphical tools. This tool would allow defining all of the DSP states and generate the corresponding values for the DSP along with the responses. The tool would then output header files that could be compiled with the Arduino sketch. Back in the assembly code days, I would lay out the menus and their options in Excel and manually transfer that information to “DB” tables. A modern tool to automatically generate those tables would be nice.

And there is still a lot of work to make the code more “debug-friendly”. The Arduino IDE doesn’t provide breakpoints and variable watches like you can get in more advanced IDEs. It would help to have some well-chosen debug flags that would provide more insight into code operation using the serial monitor or the ESP32 OLED display.

A final area for code development that comes to mind is a “native” app for iOS. I’ve got a nice user interface written for Android using the Navigation View component, but nothing comparable that would run on an iPhone. I’ve done some work with Thunkable for a different project, but I’ve only tested their Android code generator. Thunkable will supposedly generate iPhone code, but it only supports BLE Bluetooth and the controls are somewhat “clunky”. An app built with the iOS tools would look nicer, and it would allow more advanced features like graphing to display the modeled response.