
Google for "802.11 basic rate"

The basic rate is the rate used when a station (any device with a TX/RX) broadcasts a frame (we call them frames at layers 1 (PHY) and 2 (MAC)). The transmit rate is the rate used when a station transmits a frame to another specific station. BTW, this has nothing to do with IP; in fact, this is true even if IP is not in use. The low bit of the first octet of a MAC address determines whether a frame is multicast or not (technically, the basic rate is used for multicast frames, not just broadcast ones).
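
As a quick illustration (a sketch of my own, not from any driver), that multicast test is just a bit mask on the destination address:

    #include <cstdint>

    // A frame is multicast when the low bit of the first octet of the
    // destination MAC is set; broadcast (FF:FF:FF:FF:FF:FF) is just the
    // all-ones special case of multicast.
    bool is_multicast(const uint8_t dest[6]) {
        return (dest[0] & 0x01) != 0;
    }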

Why is the "transmit" rate so much higher than the "basic" rate? It all has to do with negotiation. If it turns out that when you decide to send a frame, someone else was using the medium, you'll only know this if the other side has sent you an ACK. This means you can do your backoff and try again after a short interval (usually measured in microseconds). It becomes infeasible to expect an ACK from all of the recipients for a broadcast frame however. So instead we simply send broadcasts at a much lower rate.

Also note that the transmit rate is constantly changing, based on how well ACKs have been received. This is adaptive rate control.
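
To make that concrete, here's a toy sketch (my own, in the spirit of the old ARF scheme; real drivers like minstrel are far more sophisticated) of stepping the unicast rate up after a streak of ACKs and down after misses:

    class RateControl {
        // 802.11a/g rates in Mbit/s; broadcasts stay pinned at the basic rate.
        const int rates_[8] = {6, 9, 12, 18, 24, 36, 48, 54};
        int idx_ = 0;         // current rate index
        int ok_streak_ = 0;   // consecutive ACKed frames
        int miss_streak_ = 0; // consecutive un-ACKed frames

    public:
        int current_rate() const { return rates_[idx_]; }

        // Call once per transmitted unicast frame.
        void on_tx_result(bool acked) {
            if (acked) {
                miss_streak_ = 0;
                if (++ok_streak_ >= 10 && idx_ < 7) { ++idx_; ok_streak_ = 0; }
            } else {
                ok_streak_ = 0;
                if (++miss_streak_ >= 2 && idx_ > 0) { --idx_; miss_streak_ = 0; }
            }
        }
    };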

Another thing to note: in infrastructure mode, stations (your phone, PC, etc.) send multicast frames to the AP, and the AP then turns around and blasts them out broadcast-style. So in this case, the frame is initially sent quickly to the AP as unicast at the transmit rate, but then the AP sends the same frame out as broadcast at the basic rate.

Just by having stations continually send multicast frames, you can drag everyone on the same channel down to the basic rate. Nothing really sneaky here either; you don't even need to be on the same BSSID, since this is a PHY issue.


I believe humans (and possibly other mammals) wouldn't have made it through our evolution without love. Look at how helpless humans are for at least the first 8 years of life. That's 8 summers and 8 winters. There are many, many opportunities for parents to move on and say, "This is taking too much of my time and energy; I want to do something more enjoyable."

I think this is one thing that older generations have context for that younger ones (before becoming parents) can simply be unaware of. I know: I became a parent within the last year, and it has changed my thinking on many things.

Even if we tried to live a life without love, I don't know that it's possible. Individually perhaps, but not as a strategy to have the human species continue to go forward. Empathy is something that healthy brains do naturally, and is really useful to keep a tribe or a society going (could even be required). This goes back to keeping children alive; when parents can't or won't, the tribe steps in.

Perhaps the modern world can make love obsolete; I hope that isn't the case. I think our minds have evolved to live in a pre-modern era. Until we fundamentally change our biochemistry, I'm pretty sure we'll have to make do with it (even when it's a hindrance).



And another port of GWT for .NET is http://dotweb-toolkit.com. It uses a decompiler to translate .NET assemblies into JavaScript.


Interesting idea. I'm mostly wondering how you're going to deal with different genres for different tastes. Eventually, filtering and targeting seem likely to become the number-one issue for both users and providers.

I actually like how the site reminds me of a vinyl cover; however, it might be too confusing for (some) people to use. I couldn't figure out how to bring back the info about the artist after it fades away.


I've been working on a .NET clone of GWT as a side project for a while. It doesn't have a widget library yet, but it does have web mode and development mode. It works by decompiling MSIL, which means you can use any .NET language you want.

http://github.com/flaub/DotWeb

Not much in the way of demos, but the core technology is pretty much all done. Plugins for Firefox (NPAPI-based) and IE work, so you can set breakpoints in Visual Studio. Web mode does a decent job of optimizing right now because it does method dependency analysis.


You should make a post about it. Make sure to mention how it relates to Microsoft's own similar efforts.


When you're looking for an expert in a field, do you want to find someone who claims to know what they're doing because they read a book somewhere? Or do you want a seasoned pro who has actually tried, and possibly failed, using different techniques? Their failures may give them more insight and experience than their successes, especially if they can tell you what they did to improve themselves afterwards.


Becoming an expert has far more to do with knowing how to get things done in a known time frame than with knowing every little piece of trivia. Once you've coded in the same language with 3 or 4 compilers, you start to look at edge cases as a dangerous no man's land to be avoided if at all possible.

PS: I worked with code from the 1984 Macintosh days that was still in use in 2005. You could see where people had updated from Motorola 68020 to PPC, and if it had not been stripped out, some poor coder might have had to update the remaining ASM to x86.


For me, the "greatest developer" has always really been the one who is greatest at debugging. Any developer can write code, but my experience has shown that only a subset can properly diagnose issues to find their root cause. Those on the team who can do that will save you huge amounts of time and money, because things will go wrong (especially in C++).

If you add up all the time it takes to build a particular system, I think you'll find debugging and fixing bugs to be one of the largest chunks, probably beating out the time it actually took to write, and possibly design, the system. Obviously a design more suited to maintainability will decrease this somewhat.

Given this, I'd say this is an excellent question, especially if the interviewee can tell you why it's a bad idea, or what the gotchas are with using 'delete this;'. It might not find developers with other positive characteristics, like a knack for simple and elegant designs. But if you have a mix of questions to cover the many aspects of development in the interview, you should be good to go.
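
For what it's worth, here's a quick sketch (mine, purely illustrative) of the gotchas I'd want a candidate to be able to point at:

    class Connection {
    public:
        void close() {
            // Legal only if *this was allocated with plain 'new' and
            // nothing touches the object afterwards.
            delete this;
            // Gotcha 1: any member access past this point is a
            // use-after-free, e.g. reading fd_ or making a virtual call.
        }
    private:
        int fd_ = -1;
    };

    int main() {
        Connection* c = new Connection;
        c->close();                  // OK: heap-allocated, never used again
        // c->close();               // Gotcha 2: double delete
        // Connection s; s.close();  // Gotcha 3: 's' was never new'd
    }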


My standard C/C++ interview question for the past few years: "How would you go about debugging a program that segfaulted inside of malloc?"


IMO, that's not a bad question. C++ is not my thing, but I don't see anyone mentioning:

A) Running the code on another system to double-check that the system's memory / OS has not been corrupted. I once wasted ~6 hours on a corrupted production box, so the new rule is: when a system call fails, double-check that it's not just the machine.

B) Double-checking that none of the allocator's bookkeeping memory has been overwritten, that memory has not been freed twice, etc.

Honestly, I usually try to do that type of stuff by direct code inspection. If it's failed once, I am probably going to need to debug related code in the not-too-distant future, so really understanding what's going on is important. But hack-and-slash debugging can be fun.
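
To make point B concrete, here's a contrived example (my own) that tends to produce exactly this symptom: the overrun tramples the allocator's bookkeeping, and it's a later, innocent-looking malloc that actually crashes.

    #include <cstdlib>
    #include <cstring>

    int main() {
        char* a = static_cast<char*>(std::malloc(16));
        // Bug: writes past the 16 bytes we own, scribbling over the
        // heap metadata the allocator keeps adjacent to the block.
        std::memset(a, 'x', 64);

        // The segfault typically happens here (or in free), far from
        // the real bug, when the allocator walks its corrupted lists.
        char* b = static_cast<char*>(std::malloc(16));
        std::free(b);
        std::free(a);
    }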


Since we're chiming in on possible solutions, why not do the most general thing first? Check memory usage of the system and the process right before the segfault, check environment variables that govern malloc behavior (e.g. MALLOC_OPTIONS, the OS X-specific ones), check whether a brk()/sbrk() system call succeeds right before that, and check ulimits.

Then you can either litter breakpoints and start stepping, or load the core file in gdb and start looking closer.
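
For the ulimit part, a small sketch (mine; POSIX, not tied to any particular allocator) of inspecting the relevant limits from inside the process:

    #include <sys/resource.h>
    #include <cstdio>

    int main() {
        rlimit as, data;
        // RLIMIT_AS caps total address space; RLIMIT_DATA caps the data
        // segment, which is what brk()/sbrk() grow.
        if (getrlimit(RLIMIT_AS, &as) == 0)
            std::printf("RLIMIT_AS:   soft=%llu hard=%llu\n",
                        (unsigned long long)as.rlim_cur,
                        (unsigned long long)as.rlim_max);
        if (getrlimit(RLIMIT_DATA, &data) == 0)
            std::printf("RLIMIT_DATA: soft=%llu hard=%llu\n",
                        (unsigned long long)data.rlim_cur,
                        (unsigned long long)data.rlim_max);
    }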


This feels like kind of a cheap answer, but I know that with Visual Studio you can step into most standard library functions, even if it ends up being at the assembly level; I assume the same is true for gdb.

Is the non-asshole answer to stop right before the malloc, take note of memory consumption, and determine how much memory malloc is trying to allocate?


Stepping through malloc isn't going to help you much in this case. But most people I've met that have written a lot of C/C++ code have seen this bug at least once.

When you start asking about how much memory malloc is asking for, that's a tip-off in the wrong direction for me too; malloc handles "out of memory" pretty gracefully (it just returns NULL rather than crashing).
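
A tiny demonstration of that point (my own sketch): a hopeless allocation fails politely, so a crash inside malloc almost always means the heap was corrupted earlier, not that memory ran out.

    #include <cstdio>
    #include <cstdlib>

    int main() {
        // An absurd request doesn't crash; malloc reports failure
        // by returning NULL.
        void* p = std::malloc(static_cast<size_t>(1) << 62);
        if (p == nullptr)
            std::puts("malloc failed gracefully (returned NULL)");
        else
            std::free(p);
    }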


Run the program under valgrind.


CHEATER!


I'd start logging all the previous calls to malloc/free?
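
One low-tech way to do that (a sketch of my own; the helper names are made up): wrap the calls so each one logs its call site, then grep the output for pointers freed twice or never allocated.

    #include <cstdio>
    #include <cstdlib>

    static void* log_malloc(size_t n, const char* file, int line) {
        void* p = std::malloc(n);
        std::fprintf(stderr, "malloc(%zu) = %p at %s:%d\n", n, p, file, line);
        return p;
    }

    static void log_free(void* p, const char* file, int line) {
        std::fprintf(stderr, "free(%p) at %s:%d\n", p, file, line);
        std::free(p);
    }

    // Any code compiled after these macros is logged automatically.
    #define malloc(n) log_malloc((n), __FILE__, __LINE__)
    #define free(p)   log_free((p), __FILE__, __LINE__)

    int main() {
        void* p = malloc(32); // expands to log_malloc(32, ...)
        free(p);              // expands to log_free(p, ...)
    }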


You would then be buried under an avalanche of log lines, to which I'd just ask, "What are you looking for?"


So I'm a little confused. You cite the EIA as reporting 5.22 million BOE per day, the first annual increase since 1991. I'm looking at this page:

http://tonto.eia.doe.gov/dnav/pet/hist/mcrfpus2a.htm

I might be reading it wrong, but it seems to indicate that 7.4 million BOE per day were produced in 1991 and 4.9 million in 2008.

Is there perhaps a different source on that site that you can refer us to? Or are you factually incorrect?


Also: http://www.petroleumworld.com/story09090312.htm

Excerpt:

<snip> U.S. oil output is benefiting from the addition of major deep-water fields, including BP's Thunder Horse, that are helping offset production declines onshore and in shallower Gulf waters. In many cases, these deep-water fields were discovered years ago but are only now coming on line, given the massive costs and technical challenges associated with them.

The combination of favorable factors should lift U.S. crude oil production to an average of 5.22 million barrels per day in 2009, up from 4.95 million barrels per day last year and the first annual increase since 1991, according to the U.S. Energy Information Administration. </snip>




