
Analysis is nice, although the graph style is very much 2005. The conclusion is that as long as you don't get a crappy switch, a 10 ms debounce interval should be sufficient.

I would not pay much attention to the rest of the text.

The hardware debouncer advice is pretty stale - most modern small MCUs have no problem with intermediate levels, nor with high-frequency glitches. Schmitt triggers are pretty common, so feel free to ignore the advice and connect the cap to the MCU input directly. Or skip the cap and do everything in firmware; the MCU will be fine, even with interrupts.

(Also, I don't get why the text makes a firmware debouncer sound hard? There are some very simple and reliable examples, including the last one in the text, which only takes a few lines of code.)



> Also, I don't get why the text makes a firmware debouncer sound hard?

The article links to Microchip's PIC12F629 which is presumably the type of chip the author was working with at the time.

This would usually have been programmed in assembly language. Your program could be no longer than 1024 instructions, and you only had 64 bytes of RAM available.

No floating point support, and if you want to multiply or divide integers? You'll need to do it in software, using up some of your precious 1024 instructions. You could get a C compiler for the chips, but it cost a week's wages - and between the chip's incredibly clunky support for indirect addressing and the fact there were only 64 bytes of RAM, languages that needed a stack came at a high price in size and performance too.
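
For a sense of what that means, here is the textbook shift-and-add multiply you'd end up hand-rolling - shown in C for readability, though on the real part it would be assembly; this is just the standard routine, not anything from the article:

  #include <stdint.h>

  /* Rough sketch: 8x8 -> 16-bit multiply by shift-and-add, the kind of
     loop you hand-code on a part with no hardware multiplier. Every
     iteration burns several of those precious instructions. */
  uint16_t mul8(uint8_t a, uint8_t b) {
      uint16_t result = 0;
      uint16_t addend = a;
      while (b) {
          if (b & 1)
              result += addend;   /* add in this bit's contribution */
          addend <<= 1;
          b >>= 1;
      }
      return result;
  }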

And while we PC programmers can just get the time as a 64-bit count of milliseconds and not have to worry about rollovers or whether the time changed while you were in the process of reading it - when you only have an 8-bit microcontroller that was an unimaginable luxury. You'd get an 8-bit clock and a 16-bit clock, and if you needed more than that you'd use interrupt handlers.
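
Roughly what that looked like, as a C sketch - the ISR name, its hookup, and read_hw_counter() are placeholders, since the real names depend on the MCU:

  #include <stdint.h>

  extern uint16_t read_hw_counter(void);   /* placeholder: read the 16-bit timer */

  volatile uint8_t timer_high;             /* software extension of the counter */

  void timer_overflow_isr(void) {          /* runs once per 16-bit rollover */
      timer_high++;
  }

  uint32_t read_time(void) {
      /* Re-read if an overflow slipped in between reading the two halves -
         this is the "did the time change while I was reading it" problem. */
      uint8_t hi;
      uint16_t lo;
      do {
          hi = timer_high;
          lo = read_hw_counter();
      } while (hi != timer_high);
      return ((uint32_t)hi << 16) | lo;
  }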

It's still a neat chip, though - and the entire instruction set could be defined on a single sheet of paper, so although it was assembly language programming it was a lot easier than x86 assembly programming.


That chip has a 200ns instruction cycle though. Whatever program you're running is so small that you can just do things linearly: i.e. once the input goes high you just keep checking if it's high in your main loop by counting clock rollovers. You don't need interrupts, because you know exactly the minimum and maximum number of instructions you'll run before you get back to your conditional.

EDIT: in fact with a 16-bit timer, a clock rollover happens about every 13 milliseconds, which is a pretty good debounce interval.
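
Something like this, as a rough sketch - read_pin() and timer_rolled_over() are made-up stand-ins for the MCU-specific register reads:

  #include <stdint.h>

  extern uint8_t read_pin(void);            /* 1 when the switch input is high  */
  extern uint8_t timer_rolled_over(void);   /* 1 once per 16-bit timer rollover */

  /* Rough sketch of doing it "linearly" in the main loop: after the first
     high sample, let one full timer rollover (~13 ms here) pass, then
     sample the pin again to see if it's still high. */
  uint8_t wait_for_stable_press(void) {
      while (!read_pin())
          ;                                 /* wait for the first high sample */
      while (!timer_rolled_over())
          ;                                 /* let the bounce window pass     */
      return read_pin();
  }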


Sure! I'm not saying debouncing in software was impossible.

But a person working on such resource-constrained chips might have felt software debouncing was somewhat difficult, because the resource constraints made everything difficult.


This is basically the answer.

Note that a lot of the content that Jack posted on his site or in the newsletter was written years, if not decades, ago in one of his books or when he was writing for "Embedded Systems Programming" magazine. He was pretty good about only reposting content that was still relevant (he completely retired last year), but every so often you'd see something that was now completely unnecessary.


You've read the article, right? None of the code the author gives needs a "64-bit count of milliseconds" or floating-point logic.

The last example (the one I mentioned in my comment) needs a single byte of RAM for state, and updating it involves one logical shift, one "or", and two or three compares and jumps. Easy to do even in assembly with 64 bytes of RAM.


Do you mean this code, from the article?

  uint8_t DebouncePin(uint8_t pin) {
      static uint8_t debounced_state = LOW;
      static uint8_t candidate_state = 0;
      candidate_state = candidate_state << 1 | digitalRead(pin);
      if (candidate_state == 0xff)
          debounced_state = HIGH;
      else if (candidate_state == 0x00)
          debounced_state = LOW;
      return debounced_state;
  }
That doesn't work if you've got more than one pin, as every pin's value is being appended to the same candidate_state variable.

The fact the author's correspondent, the author, and you all overlooked that bug might help you understand why some people find it takes a few attempts to get firmware debouncing right :)


I don't think anyone is overlooking anything, because it should be pretty clear this code is a template, meant to be modified to fit the project style.

In particular, that's not assembly (we were talking about assembly), and it uses Arduino-style digitalRead and HIGH/LOW constants, which simply do not exist on the PIC12F629 or any other MCU with 64 bytes of RAM. Translating this to non-Arduino would likely be done by replacing digitalRead with an appropriate macro and removing the "pin" argument.
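
Or, if you keep the "pin" argument, the per-pin fix is only a couple of lines more - a rough sketch, with read_raw_pin() as a placeholder for however the platform samples an input:

  #include <stdint.h>

  extern uint8_t read_raw_pin(uint8_t pin);   /* placeholder for the raw input read */

  #define NUM_PINS 4

  /* Same shift-register idea as the article's template, but with one copy
     of the state per input so multiple pins don't share a variable. */
  uint8_t debounce_pin(uint8_t pin) {
      static uint8_t debounced_state[NUM_PINS];
      static uint8_t candidate_state[NUM_PINS];

      candidate_state[pin] = (uint8_t)(candidate_state[pin] << 1) | read_raw_pin(pin);
      if (candidate_state[pin] == 0xff)
          debounced_state[pin] = 1;
      else if (candidate_state[pin] == 0x00)
          debounced_state[pin] = 0;
      return debounced_state[pin];
  }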

But if you want to talk more generally about the atrocious state of firmware development, where people just copy-paste code from the internet without understanding what it does, then yeah... there seems to be something in firmware development that encourages sloppy thinking and wild experimenting instead of reading the manual. I've seen people struggle to initialize GPIO without the helpers, despite it being like 2-3 register writes with very simple explanations in the datasheet.
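
To put a shape on "2-3 register writes", something like this - the register names and addresses below are made up, the real ones come straight out of the datasheet and memory map:

  #include <stdint.h>

  /* Purely illustrative register definitions; not any real part. */
  #define CLK_EN_REG   (*(volatile uint32_t *)0x40000000u)
  #define PORT_DIR_REG (*(volatile uint32_t *)0x40000004u)
  #define PORT_OUT_REG (*(volatile uint32_t *)0x40000008u)

  static void gpio_init_led(void) {
      CLK_EN_REG   |= (1u << 0);   /* 1: enable the clock to the GPIO block */
      PORT_DIR_REG |= (1u << 5);   /* 2: make pin 5 an output               */
      PORT_OUT_REG |= (1u << 5);   /* 3: drive it high                      */
  }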


>No floating point support, and if you want to multiply or divide integers? You'll need to do it in software, using up some of your precious 1024 instructions.

Very much not true, as almost nobody ever used floating point in commercial embedded applications. What you use is fractional fixed-point integer math. I used to work in automotive EV motor control, and even though the MCUs/DSPs we used have had floating-point hardware for a long time now, we still never used it, for safety and code-portability reasons. All math was fractional integer. Maybe today's ECUs have started using floating point, but that was definitely not the case in the past, and every embedded dev worth his salt should be comfortable doing DSP math without floating point.

https://en.wikipedia.org/wiki/Fixed-point_arithmetic

https://en.wikipedia.org/wiki/Q_(number_format)
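
For anyone unfamiliar, the core of it fits in a few lines. A rough Q15 sketch, not from any particular codebase:

  #include <stdint.h>

  /* Q15: values in [-1, 1) stored as int16_t with 15 fractional bits.
     Multiply is an integer multiply into a Q30 intermediate, then a
     shift back down to Q15. */
  typedef int16_t q15_t;

  static q15_t q15_mul(q15_t a, q15_t b) {
      int32_t product = (int32_t)a * (int32_t)b;   /* Q30 intermediate       */
      return (q15_t)(product >> 15);               /* back to Q15, truncated */
  }

  /* Example: 0.5 * 0.25 = 0.125
     0.5 -> 16384, 0.25 -> 8192, q15_mul -> 4096 -> 0.125.
     A production version would also saturate instead of wrapping. */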


Plenty of embedded microcontrollers in the 70s and later not only used floating point but used BASIC interpreters where math was floating point by default. Not all commercial embedded applications are avionics and ECUs. A lot of them are more like TV remote controls, fish finders, vending machines, inkjet printers, etc.

I agree that fixed point is great and that floating point has portability problems and adds subtle correctness concerns.

A lot of early (60s and 70s) embedded control was done with programmable calculators, incidentally, because the 8048 didn't ship until 01977: https://www.eejournal.com/article/a-history-of-early-microco... and so for a while using something like an HP9825 seemed like a reasonable idea for some applications. Which of course meant all your math was decimal floating point.


>Plenty of embedded microcontrollers in the 70s

Weren't those more like PCs and less like microcontrollers?


No, though chips like the 80186 did blur the line. But what I mean is that different companies sold things like 8051s with BASIC in ROM. Parallax had a very popular product in this category based on a PIC, you've probably heard of it: the BASIC Stamp.


Intel 8052AH-BASIC. I loved the manual for that chip! Written with a sense of irreverence that was very unlike Intel.


You were talking about microcontrollers from the 1970s, then you brought up Parallax as an example.


No, that was your edit to my text. I said, "in the 70s and later". You removed the "and later". You are behaving badly.


Are you saying that it isn't true that there was no floating point support? That there actually was, but nobody used it? I don't see how that changes the thrust of the parent comment in any significant way, but I feel like I may be misunderstanding.


No. They're saying that fixed-point math was used instead of floating point. Floating-point hardware added a lot of cost to the chip back then and it was slow to do in software, so everyone used integer math.

The price of silicon has dropped so precipitously in the last 20 years that it's hard to imagine the lengths we had to go to in order to do very simple things.


>Are you saying that it isn't true that there was no floating point support?

NO, that's not what I meant. I said you didn't need it in the first place anyway since it wasn't widely used in commercial applications.



