
I'm quickly coming to the opinion that, if you're doing deep-embedded development in anything except assembly language (or some other language with assembly's level of abstraction), you're just doing it wrong. Full fucking stop.


Two weeks and counting debugging this fucking live-lock bug with this UART, and Rust's utter inability to cooperate with me when debugging is just straight up infuriating.
I don't have much experience with microcontrollers, but I've ended up debugging low-level stuff multiple times, and usually I was looking at disassembly and single-stepping instructions regardless of what language the software was written in.

The only way I can think of that Rust could make it more difficult is by spitting out horrible instructions that make the disassembly unreadable. But unreadable assembly is also usually suboptimal, so I think they'd try not to do that...
If you use a normal debug build, that is indeed the case. However, because we are optimizing for size, a lot of size optimization takes place before the final assembly is generated. As a result, the assembly listing bears relatively little relationship to the original source code.
I remember once hunting down a bug in some embedded code I'd written in C. It turned out that the compiler was getting the order of operations wrong (evaluating || before &&). Using parentheses to make it explicit corrected the problem.
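For reference, C defines `&&` to bind tighter than `||`, and Rust uses the same precedence, so a compiler that evaluated `||` first would invert the grouping. A small Rust sketch of exactly what the explicit parentheses pin down:

```rust
fn main() {
    let (a, b, c) = (true, false, false);

    // && binds tighter than ||, so this parses as a || (b && c).
    let implicit = a || b && c;
    let explicit = a || (b && c);
    assert_eq!(implicit, true);
    assert_eq!(implicit, explicit);

    // If || were (wrongly) grouped first, you'd get (a || b) && c,
    // which flips the answer for these inputs.
    let wrong_grouping = (a || b) && c;
    assert_eq!(wrong_grouping, false);

    println!("ok");
}
```

With a miscompiling toolchain, spelling out the parentheses is the only grouping the optimizer can't second-guess.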
Yes. Very, very, very yes.

Nearly all Forth systems have source code which maps 1:1 with the produced object code. Even in those systems which lack an exact 1:1 correspondence, the relationship between the generated assembly language and the original Forth code is usually obvious upon visual inspection.

In either case, Forth code is significantly easier to debug both interactively and in batch.
Asking as a Forth noob: How do you debug Forth? Do you run it in an emulator and step through it? Or add asserts and prints?
This depends heavily on which Forth environment you're working with, so I cannot give specifics. However, with SwiftX, you can compile and run individual words, and interrogate and alter variables, and so forth, exactly as if it were a host-compiled program. So even though you're working on the target, the user interface is exactly the same as if you were programming on the host.
@Vertigo #$FF Ah, this one: "SwiftX is an interactive development environment (IDE) and cross compiler for developing code to run on microcontrollers and microprocessors."

So you can have something like a REPL on the desktop machine, but the code is interactively compiled and run on the device you're coding for? Sounds awesome.

@Csepp 🩸 @Josias
I recall that the architecture of the PDP-11 made it fairly easy to write directly in machine code, without assembly language.
12724 123 -> MOV #,(R4)+

That was because the codes strictly obeyed a very few rules 😀

Yeah, I know, it's the old way; it's considered almost indecent now 😜
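Those "very few rules" are mostly fixed octal fields. As an illustration (a field-split sketch, not a real disassembler), here's why a word like 012724 reads as MOV #…,(R4)+ by eye:

```rust
// Sketch: split a PDP-11 double-operand instruction word into its
// octal fields. Each 3-bit field is one octal digit, which is why
// the encoding is readable without an assembler.
fn fields(word: u16) -> (u16, u16, u16, u16, u16) {
    let opcode = word >> 12;         // 01 = MOV (word-sized)
    let src_mode = (word >> 9) & 7;  // source addressing mode
    let src_reg = (word >> 6) & 7;   // source register
    let dst_mode = (word >> 3) & 7;  // destination addressing mode
    let dst_reg = word & 7;          // destination register
    (opcode, src_mode, src_reg, dst_mode, dst_reg)
}

fn main() {
    // 012724 (octal): MOV #imm,(R4)+ -- the immediate follows in
    // the next word (the "123" in the example above).
    let (op, sm, sr, dm, dr) = fields(0o012724);
    assert_eq!(op, 0o1);            // MOV
    assert_eq!((sm, sr), (2, 7));   // mode 2 on the PC = immediate
    assert_eq!((dm, dr), (2, 4));   // mode 2 on R4 = (R4)+
    println!("MOV #,(R4)+");
}
```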
Working with Rust would be OK if I had a target environment that fully supported a debug-mode build of the binary. But, since this is a small microcontroller with very limited flash, that is just not possible for our current environment.

I'd argue, based on this, that Rust is excellent for application development, or maybe large-scale kernel development. But it's not ready for deep embedded work yet. It just doesn't support debugging under such tight constraints.
I thought for embedded profiles the debug information was split into a separate file (and not included in the binary). Otherwise this should be fixable with
I would hope that the debug info has nothing to do with the size of the final executable that is put on the device. If so, then something is very wrong.

I'm guessing a debug build doesn't turn on optimizations and the binary is too big to fit on the device. Is that accurate?

When I tinkered with Rust at work, the debug build produced code that was huge. If you could get optimized code with debug info, maybe that would help. I don't know how to do that, though. If it were me, I'd be looking into how you control what options are passed to the compiler.
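For what it's worth, Cargo can keep debug info in an optimized build: the DWARF lives in the ELF on the host and isn't part of what gets flashed to the device. A minimal sketch of a size-optimized profile that still carries debug info (standard Cargo profile keys; whether the resulting code is still debuggable is the question the rest of this thread is about):

```toml
# Cargo.toml -- size-optimized build that still emits DWARF
[profile.release]
opt-level = "z"   # optimize aggressively for size
lto = true        # link-time optimization shrinks things further
debug = true      # debug info stays in the host-side ELF, not in flash
```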

(Incidentally, I work on a C/C++ compiler for embedded systems at my day job and I'm gently pushing for us to start supporting Rust, so your posts on this are helpful to me. Thanks!)
Is that accurate?
Exactly! In order to make everything fit into the flash right now, we need to have all optimizations turned on. So, for example, a lot of the lines of code in the original source are reported by GDB as having been optimized out, making them useless for interactive debugging purposes. A lot of lambda expressions are also optimized out, making it nearly impossible to fully debug without making changes to the source code. Etc.
Okay, that's good to know. I don't have a solution for you, but that matches the experience a lot of our customers have with optimization in general: they get lost when debugging. (Even though the code is really quite good most of the time.)

I haven't been able to figure out a good way to handle this. Outside of embedded, I've found debugging optimized code to be not much of a problem. But I haven't had to debug optimized code on a device in any serious way, so it's hard to compare.

I'm always wondering what the difference is and haven't been able to crack that nut yet.
Yeah, debugging optimized code sucks.

I've never done this myself, but I've heard you can leave your own code unoptimized while optimizing all the other packages (that you are hopefully not debugging) for size.
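Cargo's profile overrides support exactly this split: your own crate stays at opt-level 0 for debuggability while every dependency is compiled for size. A sketch using standard Cargo override syntax (whether the result still fits in flash depends on how much of the size lives in your own crate):

```toml
# Cargo.toml -- dev profile with per-package optimization overrides
[profile.dev]
opt-level = 0               # your crate: unoptimized, fully debuggable

[profile.dev.package."*"]   # applies to every dependency
opt-level = "z"             # dependencies: optimized for size
```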
@vertigo @yrabbit
30yrs since I did any embedded systems stuff and ye gods I'm enjoying reading what you write. Wishing you all strength to your debugging hammer
b-but I was informed decades ago that Java would solve this
haha yeah I was being snarky πŸ˜„

it's nice that it ever worked out on any platforms at all I guess ... man I haven't written assembly in over a decade
I've done a little 6502 (8502) and a little 68k and they read like a programming language.

I haven't seen ARM, but I was under the impression that a modern RISC like it would have asm uncomfy for humans. Seems I was wrong?
