Of course I wouldn't vibe code in a serious production project, but I'd
still use an AI agent, except I'd make sure I understand every line it
puts out.
So you value your ability to churn out insignificant dreck over the ability of others to use the internet? Because that's the choice you're making. All of the sites that churn your browser for a few seconds because they're trying to block AI DDoS bots, that's worth your convenience on meaningless projects? The increased blast radius of Cloudflare outages, that's a cost you're foisting onto the rest of the internet for your convenience?
This is such a... unique angle. Of all the things to get angry at AI for, web crawlers and the impact on Cloudflare outages are the ones that really grind your gears?
The difference is context: that link is a service saying they're having to make some changes, because of AI crawlers.
The context here is it's an HN post about someone being excited about AI bringing them back to coding, in response to a comment on that HN post with someone else being excited about the useful things they've been able to do with AI.
In that context, it's a unique choice to respond with a load of confrontational and rhetorical questions about AI being bad because of crawlers and Cloudflare outages. It reads like they just wanted 2 excited programmers to feel bad about themselves for using AI. It's not really the sort of response I'd expect here.
The Rust runtime will, at a minimum, set up the stack pointer, zero out the .bss, and fill in the .data section. You're right in that a heap is optional, but Rust will get very cranky if you don't set up the .data or .bss sections.
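For anyone curious what that looks like written out by hand, here's a rough sketch of the kind of reset handler cortex-m-rt generates for you. The __sbss/__ebss/__sdata/__edata/__sidata symbol names are assumptions that would come from your linker script, and the initial stack pointer is normally loaded from the vector table by the hardware before any of this runs.

    // Rough sketch of the startup work described above (roughly what the
    // cortex-m-rt crate's Reset handler does). Symbol names come from the
    // linker script and are assumptions for this sketch.
    use core::ptr::{addr_of, addr_of_mut};

    extern "C" {
        static mut __sbss: u32;  // start of .bss in RAM
        static mut __ebss: u32;  // end of .bss in RAM
        static mut __sdata: u32; // start of .data in RAM
        static mut __edata: u32; // end of .data in RAM
        static __sidata: u32;    // initial .data values stored in flash
    }

    #[no_mangle]
    pub unsafe extern "C" fn Reset() -> ! {
        // Zero .bss so statics without initializers really start at zero.
        let mut dst = addr_of_mut!(__sbss);
        while dst < addr_of_mut!(__ebss) {
            dst.write_volatile(0);
            dst = dst.add(1);
        }

        // Copy .data initializers out of flash into RAM.
        let mut src = addr_of!(__sidata);
        let mut dst = addr_of_mut!(__sdata);
        while dst < addr_of_mut!(__edata) {
            dst.write_volatile(src.read_volatile());
            src = src.add(1);
            dst = dst.add(1);
        }

        // Only now is it safe to call into Rust code that touches statics.
        loop {}
    }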
Levi's stuff has been made overseas for decades now. It's only with the more recent shift towards using cotton blends in nearly all of their jeans that the longevity has suffered.
Right, but you're not really competing on processor speed. You're competing on maturity of peripherals, where the RP doesn't really match up, PIO or not.
Edit: I see you're comparing it to the 3.2 but I suspect most folks are going to be comparing your offering to the 4.x.
Yeah - I don't really consider this comparable for my uses which rely heavily on the DSP and processing power of the Teensy itself either.
Drama and whatnot aside, I'm not really sure why anyone would buy the (considerably more expensive) Teensy over something RP-based if the RP was already suitable for their needs.
Interestingly, despite being a Teensy fan, I have found myself reaching more towards the RP when I can, because I can't stand the Arduino API and much prefer the RP SDK. I do use the Teensy without Teensyduino (Makefile based) and also a bit of the CMSIS-DSP stuff directly - but it's kinda clunky IMO.
I've been interested to hear more about use cases for these "hybrid" MCUs. Can you share a bit about why you chose that over something like a Cortex-A running Linux, or an SoC with -A and -M cores?
It's a good question - unfortunately I don't really have a good answer...
Almost all of my embedded activities are for my own hobby purposes, and I just like the ability to go 'as low as I can' with projects on MCUs. It's nice to be able to use the device's peripherals as much as possible (hardware DSP etc.) and I'm not confident in how I'd do that on a Linux-based system. I'm into building my own ham radio software-defined receivers and it's nice to keep it completely real time.
If I were to be doing this stuff professionally (and I am very close to people who do at work) then yeah I'd probably be using Zephyr or something.
Ah interesting! I work on (very expensive) SDRs and we make pretty heavy use of Xilinx Zynq UltraScale+ SoCs. They combine Cortex-A, Cortex-R, and FPGA fabric all in one package, with some fancy interconnects. So you can handle the hard realtime stuff on an RTOS or in the FPGA, then send the data over to the application processor with a hard-float FPU to crunch some numbers (or build some kind of DSP IP into the FPGA, idk much about that side of it).
I've also seen some cool stuff with the BeagleBone products, which have a few TI custom architecture DSPs and "realtime units" which you can communicate with via Linux.
But yeah, I can certainly see how just doing it all on a super fast MCU could be easier and cheaper without the backing of commercial enterprises.
I've always thought it would be cool to design a "poor man's Zynq" hat for an SBC. Stick an RP2350 and a Lattice FPGA on there and set up some SPI / UART connections.
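The MCU side of something like that could stay pretty simple too. Here's a hypothetical sketch written against the embedded-hal 1.0 SpiDevice trait so it isn't tied to any particular RP HAL; the one-byte "register read" command format is invented purely for illustration.

    use embedded_hal::spi::{Operation, SpiDevice};

    /// Read one hypothetical FPGA register over SPI: send the address,
    /// then clock back a single byte, all in one chip-select assertion.
    /// The 0x80 "read" flag is made up for this sketch.
    fn read_fpga_reg<SPI: SpiDevice>(spi: &mut SPI, addr: u8) -> Result<u8, SPI::Error> {
        let mut response = [0u8; 1];
        spi.transaction(&mut [
            Operation::Write(&[addr | 0x80]),
            Operation::Read(&mut response),
        ])?;
        Ok(response[0])
    }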
it will have benefits over the 4.x - we can always spin up a version with the iMX chipset (we have a metro board with the little sister chip, iMX RT1011 already in stock) - tbh if we did something with the iMX RT106x we'd probably start with a Metro (Arduino-shield compatible) or Feather board since that's a super-popular pinout.
either way, more hardware is better and we don't want to just give people the same-old-same-old... as we mentioned there's lots of things that we can add to make the board useful to people (SWD, USB C, lipoly batt, onboard storage, neopixel LED, etc.). what peripheral/library are you specifically concerned about?
Mostly I'm just leery of software-defined peripherals being at the mercy of whatever community springs up around them, nothing specific. In terms of a Metro, then yeah, something to slot in where the Due was, absolutely, with high-speed USB, 10/100 Ethernet, CAN FD, and all that jazz that wouldn't work on a $10 board. A SAMV70 successor to the Due?
NXP just seems antithetical to an open platform. Then again Arduino went with Renesas, and they're… not great.
Otherwise it's the openness that would pique my interest. SWD headers, yes 100%. But also the documentation. No half-assed SVDs, buggy closed source flash algorithms (Microchip), wholly undocumented peripherals (looking at you Renesas), stuff like that.
All chip manufacturers are alike in this respect, unfortunately. That whole industry believes that they thrive on secrecy and that simply properly speccing their hardware would already be a massive competitive risk.
Nah, it's a spectrum. Companies like NXP and Infineon are at one end. NXP wants a ton of personal information to access even the most basic docs on some of its chips, sometimes even an NDA. Infineon won't even acknowledge you for the most part.
Companies like STM, RP, and TI are at the other end. STM got super popular because they're cheap and the documentation is incredibly easy to get at. I think RP is following suit.
Renesas puts out some documentation, but it's really rough. Anything that has even a whiff of crypto is completely undocumented. They're also squatting on a few Rust crates, whereas Espressif actually hired a Rust developer to work on their Rust HAL. The most comical thing is that while they version their reference manual, they don't seem to update it and instead issue a ton of broad errata that apply to multiple manuals.
Before the acquisition Atmel's documentation was well written and organized.
Agreed. I will say that Renesas does have one thing going for them: being Japanese, they have the lowest supply chain/geopolitical risk right now of anybody other than TI.
And the Microchip/Atmel low end stuff is so overpriced/outdated that you'd be better off stockpiling reels of the 8 cent Puya chips or going with the TI MSPM0.
That's fair. Even so, the majority of the companies whose chips I would consider for specialized electronics seem to be so far down on the paranoid spectrum that it hinders their business.
Sure, some do, but some are coming around and some were never there. Which is why it's important for a company like Adafruit to pick a manufacturer that is towards the open end of the spectrum. Unfortunately NXP isn't that manufacturer even if their silicon is more powerful.
If you replace the Teensy 4.x it would have to be something very close to the same pinout, footprint, cost, and features, otherwise it would just be a new product. Ideally you would find a way to source the Teensy directly, bypassing SparkFun.
Yes, obviously, but they don't make the chips, so can't you just source the exact same chip, make the thing pin-compatible, and call it a day? Then you'd have a drop-in replacement; any changes you make will cause disruption for people downstream.
Doubt it. Of all the issues I run into with Siri none could be solved by throwing AI slop at it. Case in point: if I ask Siri to play an album and it can't match the album name it just plays some random shit instead of erroring out.
Um, if I ask an LLM about a fake band it literally says "I couldn't find any songs by that band, did you type it correctly?", and it's about a million times more likely to guess correctly. Why do you say it doesn't solve loads of things? I'm more concerned about the problems it creates (prompt injection, hallucinations in important work, bad logic in code); the actual functionality will be fantastic compared to Siri right now!
Because I'm sitting here twiddling my thumbs waiting for random pages to go through their anti-LLM bot crap. LLMs create more problems than they solve.
Um, if I ask an LLM about a fake band it literally says "I couldn't find any
songs by that band, did you type it correctly?", and it's about a million
times more likely to guess correctly
Um, if Apple wrote proper error handling in the first place, the issue would be solved without LLM baggage. Apple made a conscious decision to handle "unknown" artists this way; LLMs don't change that.
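The behavior being asked for isn't exotic either; it's the boring "no match means an error, not a guess" pattern. A toy sketch of the idea (names are all made up, nothing to do with Apple's actual APIs):

    /// Toy version of "error out instead of playing something random":
    /// if the requested album isn't in the library, return an error.
    fn play_album(library: &[&str], requested: &str) -> Result<(), String> {
        match library.iter().find(|a| a.eq_ignore_ascii_case(requested)) {
            Some(album) => {
                println!("Playing {album}"); // stand-in for starting playback
                Ok(())
            }
            None => Err(format!("No album named '{requested}' found")),
        }
    }

    fn main() {
        let library = ["Blue Train", "Kind of Blue"];
        // Prints the error instead of falling back to random playback.
        if let Err(e) = play_album(&library, "Kind of Blew") {
            eprintln!("{e}");
        }
    }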
Of note neither the debugger nor user USB port on that board work with ARM Macs (guess how I found that out). You can connect it to a hub as a workaround but that may lead to data corruption (per the errata).
Also worth noting that the discrete STLink V3 dongles also use the F7 for USB stuff.
Also also worth noting that not all of the Embassy examples are set up to work with Nucleo boards. It's an odd choice but it is what it is.
Embassy provides some traits, but it's pretty much expected you'll be using traits from embedded-hal (both 0.2 and 1.0).
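That split is most of the point, in my experience: you write drivers against the embedded-hal traits and they run on Embassy's HALs or anything else that implements them. A minimal blocking sketch (the Blinker type is made up; OutputPin and DelayNs are the real embedded-hal 1.0 traits):

    use embedded_hal::delay::DelayNs;
    use embedded_hal::digital::OutputPin;

    /// Hypothetical driver written only against embedded-hal 1.0 traits,
    /// so any HAL's pin and delay types can be dropped in.
    pub struct Blinker<P, D> {
        led: P,
        delay: D,
    }

    impl<P: OutputPin, D: DelayNs> Blinker<P, D> {
        pub fn new(led: P, delay: D) -> Self {
            Self { led, delay }
        }

        /// Blink `n` times with `period_ms` between edges.
        pub fn blink(&mut self, n: u32, period_ms: u32) -> Result<(), P::Error> {
            for _ in 0..n {
                self.led.set_high()?;
                self.delay.delay_ms(period_ms);
                self.led.set_low()?;
                self.delay.delay_ms(period_ms);
            }
            Ok(())
        }
    }

The async side goes through the embedded-hal-async traits in the same way, as far as I've seen.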
IMO one of the big reasons Arduino stayed firmly hobbyist tier is because
it was almost entirely stuck in a single-threaded blocking mindset and
everything kind of fell apart as soon as you had to do two things at once.
I think Arduino also suffered because they picked some super capable ARM chips and weren't really prepared to support people migrating away from AVR. Even the Uno R4 is obscenely complex.
Conversely, Embassy suffers from being immature, with some traits that haven't really been fleshed out sufficiently.
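For contrast with the single-threaded blocking mindset quoted above, the "two things at once" case is basically Embassy's pitch. A rough sketch assuming embassy-executor and embassy-time on some chip HAL (HAL init, panic handler, and the actual peripherals are omitted here):

    #![no_std]
    #![no_main]

    use embassy_executor::Spawner;
    use embassy_time::{Duration, Timer};

    #[embassy_executor::task]
    async fn blink() {
        loop {
            // toggle an LED here
            Timer::after(Duration::from_millis(500)).await; // yields, doesn't block
        }
    }

    #[embassy_executor::task]
    async fn poll_sensor() {
        loop {
            // read a sensor here
            Timer::after(Duration::from_millis(50)).await;
        }
    }

    #[embassy_executor::main]
    async fn main(spawner: Spawner) {
        // Both tasks run concurrently on one core, no RTOS threads needed.
        spawner.spawn(blink()).unwrap();
        spawner.spawn(poll_sensor()).unwrap();
    }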
Thanks.