Category Archives: Rants

Immersive – The Code Inception

I snapped this photo the moment before I wrote the first line of code for Immersive. I was on an extended, month-long vacation with friends in Kauai, Hawaii. The trip was a celebration of life for my good friend, the late Ed Bortolini. We were staying on Poipu Beach and enjoyed everything from helicopter rides, scuba diving, surfing, and kayaking to great dining; we even backpacked the Kalalau Trail.

I started with a programming language I didn’t know, using a framework I had never heard of, solving a problem I had never thought about, in an industry I knew nothing about. My only motivation was to create something cool and impact the world. Oh… and at the time I did it for free, with no legal contract in place. Immersive’s code base still uses much of this code today, and demos of it working are now plastered all over the internet. A mere nine months later, Immersive has generated a lot of press and obtained several clients. Even though I no longer serve as the VP of Engineering, I’m still a big fan of the company. See below for just a small fraction of the coverage. E ho’a’o no i pau kuhihewa.


CNN Money Article


Live CNN Video


Mashable Article


New York Times Article


Huffington Post Article and Video


ADWEEK Article


The Next Web Article


PSFK Article


BNET Article


BetaBeat Article


BrandChannel Article


CNN Fortune Article


PC World Article


Sprouter Article


Business Insider Top 25 Startups

Also check out the YouTube Channel.

The Pace of Innovation – Is Singularity Possible?

Resistance is futile. Over the last century our society has innovated at an ever-increasing rate, and machines have come to play a dominant role in our homes and in every commercial industry. In just 12 decades mankind has seen the first diesel engine, jet propulsion, nuclear power, manned space flight, mass long-distance communication (TV, radio, the Internet), the first personal computer, the first computer programming language, laser-guided bombs, machine guns firing 6,000 rounds a minute, real-time video streams, robotic surgery, and now machines that build new machines from your living room (MakerBot).

If you continue this list of machine-aided advancements, you’ll quickly realize that not only do machines already outnumber us, but their intelligence is rapidly increasing and our dependence on their services has never been greater. Despite being in the worst recession since the Great Depression, there is so much money flowing into the technology industry that experts are warning of a second bubble. This focus on technology has swamped the USPTO with applications (it can now take over four years to receive an acceptance or denial notice). The bottom line: machines optimized to maximize our comfort and minimize commodity costs run almost every aspect of our lives.

It was only three decades ago that the first portable personal computer hit the market: the Osborne 1. This revolutionary machine weighed about 24 pounds and came equipped with 64 KB of memory and an 8-bit 4 MHz processor. The company went bankrupt only two years after its release. Nonetheless, it is the origin of the 3-pound machine with 8 GB of memory and a 64-bit 2.4 GHz dual-core processor sitting in my lap at the moment.

As a recent co-founder of an HCI company, I’ve started hearing the term Technological Singularity for the first time. To grossly undervalue and summarize the concept for those who don’t know: the Singularity is a point in the near future (~50 years out) when technology is expected to be capable of innovating on its own, without human intervention. The premise is that engineers could write an AI algorithm capable of finding flaws in a design and optimizing itself until those flaws are gone. As a result, the pace of innovation would, in essence, occur at the speed of light.

To be honest, when I first heard of the Singularity I laughed and dismissed it as complete bullshit. I’m not so sure anymore… It might sound far-fetched, but our society already uses similar real-time optimization algorithms in online search engines, marketing campaigns, and even Walmart’s distribution channels. Having recently spent a lot of time around machine learning experts in the software engineering industry, I’ve seen firsthand that they are very good at teaching machines to learn. Even though the field of AI has yet to see its own Einstein, it’s only a matter of time before one emerges and AI becomes the new split atom.

def Singularity():
    while True:
        do()                # act on the world
        result = measure()  # observe the outcome
        learn(result)       # improve before the next pass
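To make that do/measure/learn loop concrete, here is a toy hill climber that “innovates” on a single design parameter; the quadratic objective is an invented stand-in for whatever a real system would actually measure:

```python
import random

def objective(x):
    # Invented stand-in "measurement": design quality peaks at x = 3.0.
    return -(x - 3.0) ** 2

def singularity_sketch(steps=1000):
    x = 0.0                   # the current "design"
    best = objective(x)
    for _ in range(steps):
        candidate = x + random.uniform(-0.5, 0.5)  # do(): try a variation
        result = objective(candidate)              # measure() the outcome
        if result > best:                          # learn(): keep improvements
            x, best = candidate, result
    return x
```

After a thousand iterations the design lands near the optimum with no human in the loop, which is the whole premise in miniature.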

Will Python ever be taken seriously?

I’ve been writing Python for almost a year now and have been pleasantly surprised by the language. Even though Python has been around for a few decades, it still hasn’t been widely accepted as a production-grade programming language.

Its primary use is still in the online world, where it has developed a large rivalry with Ruby. It’s been my observation that Ruby wins this matchup in the eyes of developers about 80% of the time. In the embedded world the use of Python is almost unheard of, and I’ve personally never seen it used exclusively for a full standalone application.

The language is aesthetically pleasing, requires less typing, and is fantastic for rapid prototyping. It definitely has its quirks and drawbacks, but what language doesn’t? When I mention to other developers that I’m currently writing in Python, the most common reaction is disbelief. The language just isn’t taken seriously. Why?

Web Apps – The More Custom The Better

I’m a big fan of the Qt WebKit framework integration. It will, without a doubt, change the face of web-based content delivery over the next few decades (previous post). By far the most powerful aspect of this integration is the ability to create custom JavaScript APIs for hybrid web-based applications in a browser.

Put simply, a browser can customize the functionality it offers by implementing features that web content can access via JavaScript. Web apps that exercise this functionality must be written to do so, making them “custom” web apps: an app written to use features defined in a particular browser will not work without modification in Explorer, Firefox, or Chrome. For example, when GE integrates a web browser into its next-generation refrigerators, it would be beneficial to expose an API that lets an app check the inside temperature or the level of the ice tray.

I’ve been in many debates with other engineers about this option. Not one engineer has ever agreed with me right off the bat. Most oppose it because it limits where an app can run and defeats the purpose of web content, the “purpose” being mass distribution and playability on many different standardized browsers. Most don’t understand why you’d create a web app that can only be executed on a limited number of browsers. Many also argue that it burdens app developers with having to learn yet another JavaScript API. Regardless of the objection, this option is widely opposed and not fully understood by most engineers.

The engineering objections are easy to argue against. First of all, there is no such thing as a standardized browser. Despite standardization efforts from the W3C, the JavaScript landscape is a mess. The big three (IE, Firefox, and Chrome) each use a different JavaScript engine with varying degrees of standards completeness (some even have features not covered by the W3C). If you want your website to work across all three, you already have to add additional code. I know it’s a stretch, but this is already a form of web app customization.

The real value add of customized web apps is from a business perspective. I come from the embedded-device world, where the number of units sold is the driving factor (as opposed to the number of hits). By forcing apps to write to your API, you ensure a certain level of stickiness. It can be a competitive advantage to limit the devices an app can run on by exposing custom functionality through the browser. GE won’t want its ice-tray app to run on an LG fridge without modification, for the same reason Apple doesn’t want its apps to run on Android without a rewrite.

Adding a custom JavaScript API to your Qt WebKit browser is easy.  Nokia has a great blog post on its forum here. To illustrate its simplicity I’ve included my own snippet below.

Any QObject-derived C++ class can be compiled into the browser and exposed to the JS engine.

class FakeObject : public QObject
{
	Q_OBJECT
public:
	FakeObject(QObject *parent);

public slots:
	bool postInternalCommand(QString command, QString data);
	void emitSignalTest();

signals:
	void fakeSignal();
};

FakeObject::FakeObject(QObject *parent)
	: QObject(parent)
{
}

bool FakeObject::postInternalCommand(QString command, QString data)
{
	// Do something with data
	return true;
}

void FakeObject::emitSignalTest()
{
	emit fakeSignal();
}

After instantiation the object is exposed to the JS engine using a QWebFrame method.

FakeObject *m_pFake = new FakeObject(webView);
webView->page()->mainFrame()->addToJavaScriptWindowObject(QString("fake"), m_pFake);

The exposed object can now be acted on from JavaScript. Public slots are methods that can be called directly from JS, and signals can be connected to local JS functions, making for more asynchronous functionality.

var b = window.fake.postInternalCommand("fakeCommand", "fakeData");

var x = 0;
function fakeCallBack() {
	x = x + 1;
}

window.fake.fakeSignal.connect(fakeCallBack);

Leaders With Over-Inflated Egos

Today I ran across a blog post from a former professor of mine in grad school, titled “a reminder for managers and leaders,” that contained just the following picture.

As soon as I saw this I knew I had to write my own post. I’ve been both a leader and a follower on many teams and have always been surprised when the manager takes credit for the team’s success. The team spent months of their lives solving the problem, designing the solution, and doing all the grunt work to actually make it happen, and the manager ends up taking the credit? In these scenarios, I guarantee the team will not execute nearly as well the next go-around. As a manager, it is a fundamental mistake to take credit for the collective actions of your team.

I’ve seen it happen over and over. This sense of entitlement is particularly high in the startup world, where over-inflated egos can be found on every corner. Now, I’m not naive enough to believe the manager deserves none of the credit (the good ones deserve a lot). My point is that a great leader/manager will always put the team first and themselves second.

What can a great leader/manager always take full credit for? Failure. That’s it.

Python – Love / Hate

This is what I hate about Python:

1) Really bad at managing threads (the GIL prevents true parallelism).
2) All properties require “self” keyword.
3) Inheritance is supported but can be difficult to invoke.
4) Private members are difficult to encapsulate.
5) More difficult to deploy than traditional compiled languages.
6) Default timer class is limiting.
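A couple of these gripes in code form; a minimal sketch, and the Counter class is my own invented example:

```python
class Counter:
    def __init__(self):
        self.count = 0        # gripe 2: every attribute access needs "self"
        self.__secret = 42    # gripe 4: "private" is only name mangling

    def increment(self):
        self.count += 1       # forget "self." and you get a NameError

c = Counter()
c.increment()
# The "private" member is still reachable under its mangled name:
print(c._Counter__secret)
```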

This is what I love about Python:

1) Tuples. Being able to return multiple results from a method is fantastic.
2) Keyword parameters. Methods that can take “dynamic” arguments are freeing.
3) Everything is an object.
4) The “pass” keyword.
5) Generator objects.
6) Method objects.
7) Dynamic code insertion. Swapping out a method during execution.
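To make a few of the items on the love list concrete, a quick sketch (all of the function and class names below are invented for illustration):

```python
# 1) Tuples: return multiple results from one method
def min_max(values):
    return min(values), max(values)

lo, hi = min_max([3, 1, 4, 1, 5])

# 2) Keyword parameters: callers pass only what they need
def connect(host, **options):
    return options.get("timeout", 30)

# 5) Generators: lazy sequences via "yield"
def countdown(n):
    while n > 0:
        yield n
        n -= 1

# 7) Dynamic code insertion: swap out a method during execution
class Greeter:
    def greet(self):
        return "hello"

Greeter.greet = lambda self: "aloha"   # every instance now says aloha
```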

Git – Free Corporate Code

I find it amazing how much corporate code is still locked behind bars with CVS. Even though it’s the least useful revision control system, it’s still the most widely used. Don’t get me wrong: 20 years ago CVS was a monumental invention and an absolutely necessary tool for software development teams. The problem is that it tracks files, not content, which makes it a fundamentally flawed tool for managing code. It wasn’t until I started using Git on a regular basis that I truly understood this distinction. Let me share my most noticeable observation after making the switch.

In CVS, once you create a file and add content to it, the two are tightly coupled forever. The file can’t be renamed, moved, or deleted from the repository, and future iterations, even if done on a branch, are forever burdened with this history. In Git, a file can be moved, renamed, or deleted from the repository without losing any of the content’s history. This means I can merge branch B into branch A, move the result into a different directory under a different filename, and not lose any of the content history from branch B or A.

This simple difference allows me to iterate freely in a repository. Just today, I reorganized my entire eight-month-old repository into different directories with different file names. Why? Because good code matures over time. A file that once contained only a data structure definition can evolve into its own thread of control, which for organizational purposes might be stored elsewhere with the other program threads. This isn’t the only improvement over CVS: its distributed architecture and superior branch management encourage greater developer collaboration. Oh, and it’s fast… really fast.
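The rename-survival claim is easy to verify in a throwaway repository; a minimal sketch (the file names are invented, and it assumes git is on your PATH):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"

# Commit a file that starts life as a data structure definition
echo "struct definition" > datatypes.c
git add datatypes.c
git commit -qm "add data structure"

# Reorganize: move and rename it, as the code matures
mkdir threads
git mv datatypes.c threads/worker.c
git commit -qm "reorganize into threads directory"

# --follow walks the content history straight through the rename,
# so both commits appear:
git log --follow --oneline -- threads/worker.c
```

Try the same reorganization in CVS and the file’s history is simply gone.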

Restricting your code prevents iteration which stifles improvement.

Can’t learn something new? Go into retail. Legacy humans can upgrade their firmware with 20 easy commands:
http://www.kernel.org/pub/software/scm/git/docs/everyday.html