Customize the default size of your Terminal in Ubuntu

If you’re like me, you can’t stand the default dimensions of the Terminal window in Ubuntu. Luckily, there’s an easy way to fix that for good!

First, right-click on your Terminal shortcut, and select Properties from the context menu.

Then modify the Command field from “gnome-terminal” (the executable name) to:

gnome-terminal --geometry=150x30+600+600

The first two numbers, 150x30, are the width and height of your window in characters (columns and rows). The second two numbers, 600+600, are the pixel coordinates of where you’d like the Terminal window to appear on screen when you launch the program.

This should work in most other flavors of Linux as well.

Getting bashrc to work in Mac OS X 10.6 Snow Leopard

I just got a new Mac on Wednesday, so naturally I’ve been configuring it to my liking bit by bit. I made the discovery that Snow Leopard doesn’t seem to support /Users/yourusername/.bashrc out of the box (Terminal opens login shells, which read ~/.bash_profile rather than ~/.bashrc).

No worries though, there is an easy fix!

Open up Terminal and type:

sudo vi /etc/bashrc

Then at the end of the file, add the following line:

source ~/.bashrc

Then, cd to /Users/yourusername and create a .bashrc file. If you’re not sure what you’re going to put in it just yet, at least do the following:

touch /Users/yourusername/.bashrc

So that you don’t get an error next time you open a shell ;-).

Hopefully that saves someone else some time!

Where is VHDL’s jQuery?

VHDL is a semi-broken language. Now, I definitely agree with Jan Decaluwe’s blog post that VHDL’s handling of determinism is a huge asset (see: Verilog’s Major Flaw). But from a programmer’s perspective the language is obnoxious to use. Some of its features, like file I/O, are barely passable, and it takes far more brain function than it should to get things done.

Why is this? The language has always seemed a bit schizophrenic. Judging by the steering committee’s history with shared variables, this might be because hardware engineers resisted software-like concepts early on. And from what I can tell, the IEEE has done a fairly poor job shepherding the language in the nearly three decades since it was created. Why it was never open-sourced, I’ll never understand. I actually asked (I can’t remember whom) during a webinar Q&A on VHDL-2008 whether they’d consider open-sourcing the language, and was told that it “didn’t sound like a good idea.” Hmm. Maybe traditional EDA companies are to blame?

That said, VHDL isn’t the only language like this. The fact that a site like wtfjs exists is proof of that. Yet JavaScript is widely used all across the web. You might argue that that’s simply because it’s gotten better over the years, and there’s likely some truth to that. I’ll argue, however, that one of the bigger reasons JavaScript is so widely used today is that people innovated around the language’s faults by providing frameworks: jQuery, MooTools, YUI, Cappuccino…the list goes on and on. These frameworks hide the obnoxious tasks from the programmer, and even provide commonly used programming patterns and widgets to prevent wheel reinvention.

If I had more free time, I’d start writing one myself. In fact, I almost did. I got as far as writing down what I would put into it. That list is reproduced here:

  • Serial to parallel
  • Parallel to serial
  • Shift registers
  • LFSRs
  • Interleavers/Deinterleavers
  • Synchronous & asynchronous FIFOs
  • Stacks, queues
  • CAM (content-addressable memory)
  • Caches
  • Data packetizers/depacketizers (data marshaling)
  • DSP blocks
  • Microcontrollers
  • Processor bus interfaces
  • SPI interfaces
  • I2C interfaces
  • RS-232
  • A/D, D/A converter interfaces
  • GPS interfaces
  • Camera interfaces
  • Ethernet PHY/MAC

All of these should be easily customizable through generics, and each should come with a self-verifying unit testbench and documentation.
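
To give a flavor of what I mean by generic-driven customization, here’s a minimal sketch of one of the simpler items on the list, a shift register. (The names and interface are hypothetical, invented for this post; they aren’t from any existing library.)

library ieee;
use ieee.std_logic_1164.all;

-- A shift register whose word width and depth are set entirely by generics.
entity shift_reg is
  generic (
    WIDTH : positive := 8;  -- bits per word
    DEPTH : positive := 4   -- number of pipeline stages
  );
  port (
    clk  : in  std_logic;
    rst  : in  std_logic;
    en   : in  std_logic;
    din  : in  std_logic_vector(WIDTH-1 downto 0);
    dout : out std_logic_vector(WIDTH-1 downto 0)
  );
end entity shift_reg;

architecture rtl of shift_reg is
  type stage_array is array (0 to DEPTH-1) of std_logic_vector(WIDTH-1 downto 0);
  signal stages : stage_array;
begin
  process (clk)
  begin
    if rising_edge(clk) then
      if rst = '1' then
        stages <= (others => (others => '0'));
      elsif en = '1' then
        -- din enters at stage 0; everything else moves down one stage
        stages <= din & stages(0 to DEPTH-2);
      end if;
    end if;
  end process;
  dout <= stages(DEPTH-1);
end architecture rtl;

Every block in the framework would follow that same pattern: no hard-coded widths or depths, with a matching self-checking testbench living alongside the source.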

Some might argue that there’s no need for many of these because they’re already purchasable as IP. In my experience, IP always requires more of your own engineering than you expect, despite having paid for it, and support is pretty terrible. The strength of an open-source framework is that it gets the most sets of eyes on the code and documentation, to catch errors and continually improve quality. Often, with purchased IP you can’t even look at the source code to find out what’s wrong! I’ve been burned by this before.

So now all we need is a catchy-sounding name and to grab the domain.

If we build it, they will come.

The Case for vhdlUnit

Testing. It’s the most important part of engineering, whether it’s software or hardware. Yet all too often, it gets sidestepped to push product out the door. It is woefully underestimated in budgets and schedules. Then, once the schedule starts slipping, engineers have trouble fighting the urge to skimp on testing to get things “done” faster. Most frustrating of all is the pressure from management to cut testing hours from project plan estimates, because testing “costs too much”, “takes too long”, or “doesn’t add enough value.”

The last 9 months of my life have been enlightening, jealousy-inspiring, and ultimately motivating. Through the course of a number of side projects I’ve been learning and using Java, PHP, Python, JavaScript, and Tcl. The most poignant revelation to come from my varied experiences is that rigorous testing doesn’t have to be difficult or costly to implement; in fact, most languages have made it so easy to automate unit testing that the only excuse left not to do it is laziness. All of these languages have unit testing frameworks (most notably JUnit and PyUnit) which make writing tests and reporting results beautifully simple.

By day, I’m an FPGA developer and write mostly VHDL. Despite what some software developers might think about FPGA developers, we do quite a bit of unit testing, albeit without a formal framework. And that’s the problem. As schedules start slipping, the stress and anxiety of missing a deadline cause testing rigor to decrease. Ironically, if the project had a formal unit test suite that could be relied upon, stress and anxiety would be lessened because engineers wouldn’t have to fear breaking something in the frantic push to ship on time. They could simply launch their regression tests and review the results.

The most difficult part of engineering that takes place inside a text editor is that once our creations “live” in the physical world, they become a lot harder to test, and problems become a lot harder to diagnose.

Suppose you’re an analog engineer designing a bandpass filter out of discrete components, and you’ve just gotten the manufactured boards back. How do you know the filter works? Do you just look at it and say, “well, it looks like a bandpass filter, so it probably works”? Of course not. You connect a signal generator to the input, place a spectrum analyzer on the output, sweep across input frequencies, and measure the output signal to confirm your filter has the correct attenuation for out-of-band frequencies, etc. If you were to discover a problem, you’d grab a Fluke and confirm that the passive components have the proper values. This is, in essence, a unit test.

Test suites written with frameworks like JUnit give you that warm, fuzzy feeling that all of the little pieces that make up your design are working correctly. This reduces the stress on your engineers, which I believe has a strong, direct correlation with product quality.

VHDL desperately needs an open-source unit testing framework modeled after JUnit. In general, FPGA and ASIC development methodologies have severely lagged behind innovations in the software world. Kent Beck, the father of Test-Driven Development, wrote his landmark Simple Smalltalk Testing: With Patterns paper in 1989. It’s now 21 years later, and there is no vhdlUnit.
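
To make the idea concrete, here’s a rough sketch of the flavor I have in mind: a JUnit-style assertEquals helper built on VHDL’s native assert/report machinery, wrapped in an ordinary testbench. (All names here are hypothetical; no such framework exists today, which is rather the point.)

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity sanity_tb is
end entity sanity_tb;

architecture test of sanity_tb is
  -- Hypothetical vhdlUnit-style helper, analogous to JUnit's assertEquals:
  -- reports PASS or FAIL along with the test name and offending values.
  procedure assert_equal(actual, expected : integer; name : string) is
  begin
    if actual = expected then
      report "PASS [" & name & "]" severity note;
    else
      report "FAIL [" & name & "]: got " & integer'image(actual) &
             ", expected " & integer'image(expected) severity error;
    end if;
  end procedure;
begin
  process
  begin
    assert_equal(2 + 2, 4, "two_plus_two");
    assert_equal(to_integer(unsigned'("1010")), 10, "bin_to_int");
    report "All tests run." severity note;
    wait;  -- halt the simulation
  end process;
end architecture test;

A real framework would add test registration, setup/teardown, and a results summary that a regression script could parse, but even this small pattern beats eyeballing waveforms.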

So, who wants to get started?

iPhone prices to drop on Verizon and AT&T?

It’s basically accepted as fact by now that the iPhone is coming to Verizon. When? Probably this Christmas, or early in 2011. The point is it’s going to happen, and when it does, things are going to get interesting.

AT&T is currently believed to be paying somewhere around $350-$400 in subsidy for each iPhone sold. That’s a nice, healthy chunk of change for Apple. Since AT&T has exclusivity on the iPhone, they have decided that this subsidy is an acceptable cost in exchange for the higher monthly service fees they receive from smartphone customers on their 2-year contracts.

Over on Verizon, there is now a bevy of Android-based smartphones selling like hotcakes. Something more interesting is happening on the Android side, though. Because there are so many handsets to choose from, you can get an Android phone for cheap; many Android handsets are priced below $99. The Motorola Droid has been out for only 9 months and is already priced lower than at launch. Part of this is due to the Droid X’s release, only 8 months after the Droid. But the fact that Android handset makers need to keep such an aggressive release schedule is evidence of how hard they’re competing against each other.

Compare this to Apple. Their handsets don’t have to compete with other iOS-based handsets on AT&T. As a result, Apple can keep their prices higher. Their latest iPhone model, released every year in June, has had the same price points, $99 and $199, since the iPhone 3G. They don’t have to drop their prices, because there’s no alternative iOS handset.

Will this change once the iPhone hits Verizon? I bet it will. How else can Verizon and AT&T compete for iPhone customers, if not on the price of the handset? It makes more business sense to give customers a discount on the handset than to drop the price of service contracts. Dropping the price of the iPhone another $30-50 is a drop in the bucket compared to losing $10/month over the course of a 2-year contract. And even though the iPhone will be available on both Verizon and AT&T at that point, it’s not as though customers can jump ship to the other carrier: Verizon’s CDMA radios and AT&T’s GSM radios are incompatible.
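
The back-of-the-envelope math (my arithmetic, not the carriers’): $10/month × 24 months = $240 of service revenue at stake, versus a one-time extra $30-50 of handset subsidy.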

When the iPhone comes to Verizon, expect to see the price drop. But this won’t hurt Apple; mind you, they’ll likely get a slightly larger subsidy from the carriers as they compete for iPhone customers’ wallets.

Is Consumer Reports misleading the public about iPhone 4?

By now everyone in the world has heard that Apple’s shiny new iPhone 4 allegedly has a fatal flaw worse than the Death Star. Consumer Reports claims that their “engineers have confirmed that iPhone 4 has an antenna problem…” and that it “…really is only with the iPhone 4.”

Oh really? Just watch this video I recorded this evening with the original iPhone 2G, on T-Mobile USA’s network:

Look familiar?!

Anand Lal Shimpi over at AnandTech did quite a rigorous investigation into iPhone 4’s reception and observed that:

“squeezing it really tightly, you can drop as much as 24 dB. Holding it naturally, I measured an average drop of 20 dB.”

Interesting! I measured an average drop of 20 dB when holding my iPhone 2G naturally, as seen in my video above. But here’s the kicker from Anand’s research:

“From my day of testing, I’ve determined that the iPhone 4 performs much better than the 3GS in situations where signal is very low, at -113 dBm (1 bar)…I can honestly say that I’ve never held onto so many calls and data simultaneously on 1 bar at -113 dBm as I have with the iPhone 4, so it’s readily apparent that the new baseband hardware is much more sensitive compared to what was in the 3GS. The difference is that reception is massively better on the iPhone 4 in actual use.”

In my opinion, and I happen to be an RF communications engineer at one of the largest cellphone designers in the world, the only thing Consumer Reports can really say with certainty is that signal strength depends on a variety of factors, one of which is how the phone is held. Any smartphone, or really any radio for that matter, will have its performance affected by how the antenna is placed, held, etc. It’s irresponsible and dishonest for them to claim anything otherwise.

But wait a minute! Consumer Reports said that they tested the iPhone 4 in a “signal proof room” that simulates “real life conditions”! Before addressing that, some background:

Your smartphone and the cellular tower (or “base station”) are both really just digital radios. When you make a phone call, you’re talking on a walkie-talkie that digitizes your voice and sends it to the base station using radio waves. From there, the base station relays that voice data through the carrier’s network to the phone on the other end of the call, and vice versa.

For this whole system to work, you need a certain amount of Signal-to-Noise Ratio, or SNR, at both the smartphone and the base station. SNR is a power ratio measured in dB (decibels), while absolute signal power is measured in dBm, decibels referenced to one milliwatt (mW); these are the standard units of measure for RF engineers. If your SNR drops below the sensitivity of either radio, packets (small chunks of data, i.e. your voice or network synchronization traffic) start getting dropped. If SNR stays too low for too long, too many packets are lost (this is where your voice starts “breaking up”), and if SNR doesn’t recover to a level above sensitivity, the tower drops the call so that you don’t end up consuming precious bandwidth and preventing others from making calls.
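
To put the numbers quoted earlier in perspective (my arithmetic, not Anand’s): decibels are logarithmic, so a drop of D dB corresponds to a power ratio of 10^(D/10). A 20 dB drop therefore means the received signal power falls by a factor of 10^(20/10) = 100. Likewise, -113 dBm works out to 10^(-113/10) mW, or about 5 × 10^-12 mW, roughly five femtowatts. That’s an astonishingly faint signal for any radio to hold onto.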

What can cause SNR to drop? A whole slew of things:

  • Distance from the cell tower
  • Interference from other radio waves
  • Attenuation from buildings, trees, trucks passing by, and how you hold the phone 😉
  • Multipath distortion: a phenomenon in radio communications where multiple copies of the transmitted signal bounce off of buildings, etc. and arrive at slightly delayed times, causing inter-symbol interference (ISI). Think of it as the radio confusing 0’s and 1’s, corrupting the packets sent. 2G, 3G, and 4G technologies do have equalization and forward-error-correction (FEC) circuits that can help combat this, but they require a stronger SNR than otherwise to decode the bits sent over the air.

So, back to the “real life conditions” in Consumer Reports’ “signal proof room.” The room they’re referring to is what’s known as a screen room. In their video, they claim that this environment simulates “real life conditions.” No, it doesn’t! Screen rooms are designed to test radio performance while deliberately excluding real-life issues such as multipath, deep fades, and interference.

Does shorting the two antennas together cause a degradation in the performance of iPhone 4’s antenna? Sure. Does holding any phone change the performance of its antenna? Absolutely. Is the iPhone 4’s antenna a flawed design? Absolutely not. Could it be better? Definitely…but so could anything. Anand does suggest that Apple should “add an insulative coating…or subsidize bumper cases”. I’m not sure I agree, at least not yet. Depending on how Apple designed their antenna and radio front end, they could improve radio performance with a software update; I’ve implemented algorithms that did precisely this.

All in all, it seems clear that Consumer Reports didn’t prove anything is “flawed” with the iPhone 4, and acted irresponsibly in making the claims they did. The evidence they gave doesn’t support their claims, and was more smoke and mirrors than concrete information. It’s going to be difficult for me to trust their product reviews in the future. As for what Apple does next, stay tuned for their invitation-only press conference, scheduled for this Friday.

node.js and SeaMicro, a match made in the clouds?

A project I’ve been watching for a little while now, node.js, aims to solve the problem of scalability in web applications. It does so in an extremely efficient manner, using an abstracted event handler that processes requests without blocking or spawning new threads. Traditional web servers, like Apache, end up starting a new thread (or process) for each connection, and when thousands of users start pegging your site, this becomes extremely costly.

It’s sort of shocking how long web servers have behaved this way, but hardware is cheap, so many people simply throw more hardware at the problem. That’s certainly an option in many cases, but it isn’t ideal, or the best allocation of resources. The web has also become far less static, and Web 2.0 apps compound the problem.

Then today I saw an article over at Ars Technica about a new server architecture from SeaMicro, the SM10000. They’ve crammed 512 Atom processors into a system where networking and storage resources are pooled in a way that is hidden from each individual processor. It sounds like a perfect mate for node.js. It would be very cool to see how the two would perform together on web applications. I can envision using this to create a very efficient, scalable cloud server, while being environmentally friendly to boot!

Verizon-Motorola Decide, “We don’t need females to buy the DROID”

If you’ve seen the latest salvo by Verizon’s marketing ‘geniuses’, you’re led to believe that only girly-girls buy iPhones. Apparently if you want to be a man, you need a DROID.

If you thought past TV spots for the DROID were bad, check out the latest. You’ll swear you can hear the ‘Team America, World Police’ theme song in the background.

This ad campaign seems hellbent on condemning the DROID to be a niche device rather than one with consumer mass-appeal. No wonder rumors of Google launching their own phone had everybody buzzing over the weekend. There are too many niche devices emerging on the Android platform, and Google is rapidly turning into the Microsoft of smartphones by providing the OS to hardware manufacturers but not launching any devices themselves.

It’s not clear that replicating the Microsoft business model will be profitable for the likes of Motorola, HTC, et al. Just look at what has happened to PC margins over the past few years: you can buy a netbook for $200 at razor-thin margins to the manufacturer, yet Apple continues to grow its laptop and desktop market share while commanding margins in excess of 30%.

Google may not be a hardware company, but Motorola had better hope Google isn’t thinking, “oh, this is why Apple made their own phone.”