So this is my first, and possibly only, book review I'll do, but I felt I had to write something about The Circle.

The Circle came to my attention through the monthly book club I attend. Someone recommended it, so I got the audio book and dove in, not knowing much. It turns out The Circle will probably be my favorite book of the year.

The Circle is the story of a massive corporation, called The Circle, that has built a social network to rival all others. Throughout the book we learn that they have surpassed and then purchased Facebook, Twitter, Google, et al. Various technologies they have built are described, such as a real identity system, online payments, ad networks, analytics, and more. Basically everything that Google, Facebook, and Twitter already do, but taken to the next level. The Circle strives to know everything about everyone.

At the same time, everyone loves them, because they outwardly project the cool Silicon Valley lifestyle. They have a hip campus with gyms, dorms, cool architecture, amenities for all of their employees, etc. They hold free events and support artists. The goal is to portray it as the coolest college campus you can imagine. Everything they do is about being open and transparent, so some of the buildings are made entirely of glass or some glass-like substance. Walls, floors, and ceilings are transparent, so you can see everyone in the building. I guess in this place, no woman dares to wear a skirt.

The story follows a twenty-something named Mae, who is excited to start working at The Circle. She managed to get a job through her friend Annie, who is one of the high-ranking employees there. There are other secondary characters, but oddly, I didn't find the characters that important to the story being told. All of the characters could be boiled down to two types: pro-Circle and anti-Circle. The pro-Circle people are ridiculous narcissists, unbelievably jealous, juvenile, short-sighted, and easily manipulated. The anti-Circle people are thinking of the future and of others, but are oddly slow and dull-witted. Using these two primary types, the story easily sets up the conflict of transparency versus privacy.

Using transparency as its battle cry, The Circle starts infiltrating people's lives. The first step is through building a platform basically the same as Facebook. Share your status! Share your feelings! Send meaningless "smiles" and "frowns". Sign online petitions. This all leads to people feeling good about themselves by sharing and "taking action". A world built on empty gestures.

Next, they go a step further by placing webcams all over the world and using them as cheap surveillance. Anyone can log on and watch the feeds. The video is preserved forever and can be replayed by anyone. The Circle is documenting everything, everywhere, all of the time.

The third step is putting the camera on actual people and broadcasting everything they do. The Circle gets a US Senator and Mae to wear the devices and broadcast constantly. Again, using transparency as a battle cry, The Circle coerces all government officials to wear the devices and share their unedited lives. Those that don't find themselves embroiled in scandal from information found on their computers.

The finale is when Mae helps The Circle come up with a plan to make registering with The Circle the law worldwide. By using their real-identity system and all of the information they have, they become the de facto means of voting. Anyone that doesn't register is harassed and chastised. The harassment even leads to a few, heavily telegraphed, deaths.

In the end, The Circle "wins", and privacy is no more. Everyone is tracked constantly, monitored, and recorded. Secrets are no more.

All in all, I would recommend everyone read this book. It sums up the nightmare scenario for all privacy advocates. One company taking over everything.

Obviously, the plot is a bit far-fetched, but not as unlikely as some might think. Certainly, the road to such a corporation controlling all information would be long and difficult, but if you really think about it, companies all over have enormous troves of information about individuals. People log in to Facebook daily and write about everything that is happening in their lives. People check-in on Foursquare and let a company track their movements. Google Now actively monitors everything you do, and predicts what you might do next.

All of these services have some use for consumers, otherwise they wouldn't use them, but you have to weigh the costs and benefits. What are you really getting out of Foursquare? What are you gaining from telling Facebook all of your favorite bands, movies, TV shows, brands, etc.? What minor time savings are you getting from Google Now?

Most people assume they'll just be shown ads, and that is the end of it, but it's not. Companies are building massive databases on individual people, and that information can be used for anything. What happens when the next Joe McCarthy comes along and goes on a witch hunt? What happens when someone accesses the information and decides to blackmail you? What happens when you rely on one of these systems and they shut you out? Lastly, what happens when they violate laws or outright lie about what they're doing and pay almost nothing?

In the end, each person has to make the decision for themselves. I personally believe in being wary. I share very little on Facebook and only use the iPhone app. I don't use Google+, and don't stay logged into Gmail on my primary browser. I use a browser extension to block tracking from social networks and ad networks. All of this is to say, you don't have to completely disconnect from everything, but you have to be careful about what you do connect to.

Be skeptical.

Author: Michael Cantrell

I just launched Bulletin. It's the middle of the night, but I just finished, so I'm pretty excited. I'll have a better blog post sometime in the next 24 hours, but for now I just wanted to make a note of the launch.

Author: Michael Cantrell

As Daniel and I are prepping for our next beta release, I thought I would follow up here to talk about where we are.

At this point, I'm confident in our ability to launch soon. We've built a great product that both Daniel and I use as our day-to-day RSS reader. With each release we've added features, and the reading experience has greatly improved.

With this release, Daniel did an overhaul of the design and UX to make sure the focus of the interface is the articles. At the same time, we added Instapaper, Pocket, and Pinboard saving features, as well as more keyboard shortcuts and saved settings. Lastly, I improved our fetcher to back off of dead or broken feeds, conserving our resources. It also evens out our server load over time by distributing feed fetch times. As a bonus, we can easily distribute the fetcher over multiple servers.

For the next release, I know we'll be working on payment and subscription options, trials, and APIs.

We're still open to adding a few more people to the beta as well, so if you're interested, get in touch.

Author: Michael Cantrell

As most people know, Google decided to kill Google Reader in their latest "spring cleaning" on July 1, 2013. While the actual move wasn't a surprise, I think most people expected more of a lead time than 3½ months.

The reasons for Google shutting down Reader are obvious to those that follow them closely. There was no money in keeping it around. Based upon educated guesses, one can assume there were at least a million users, and likely far more than that. The problem is that few of those users were using the website. I believe most were using apps that used Google Reader as a syncing service, thus not giving Google any of the pageviews. Google was paying to host a syncing service for others and getting nothing for it. That doesn't help them sell ads, and they can't put ads in other people's apps. A losing proposition for any Google product.

So, what does it take to be successful and make a business out of this? There are two basic options. The first is to charge users money for the service, a novel idea for Google. The second is to lock down your service to only your website and apps, blocking out third parties, which would violate Google's "open" stance. I believe that charging users money is the way.

With that in mind, I'm happy to announce that I am working on such a product: Bulletin. I started working on it a few weeks ago, after asking on ADN whether anyone wanted such a service. I received many responses asking that I follow through and make it. I think the reason was that everyone knew the end was nigh, just not the specifics.

Unfortunately, the shutdown of Google Reader caught us off guard, so we're not quite ready, but we'll be getting there as fast as we can. I'm working with my friend Daniel to get things ready for testers in the next few weeks.

The basic plan is to charge a low monthly or annual fee for access to the service. From there we will have iOS apps and an open API for 3rd party development. Also, based on Marco Arment's post today, we will provide a Google Reader lookalike API, so that existing clients can quickly integrate with us. Community feedback for Bulletin, or any service you choose, will be important so that we can build the next generation of RSS services. I'd love to hear from people on ADN or Twitter.

Author: Michael Cantrell

In the last 6 months, I have had many opportunities come my way. I'm trying to start my own business. A former coworker presented me with another potential business. I've had contract work come up. And just recently I've decided to do a series of apps on my own.

While all of these opportunities are great, there is only so much time in the day. I can only do so much work in a week, and that means picking and choosing what I work on. If I don't do that, I'll try to do everything, and finish nothing.

With that in mind, I recently had to go back on a commitment I made. I had to back out of the potential business with my former coworker. She had a great idea; I just didn't have the time to commit to it. When you're trying to start a new business, you're trading your time for a future payoff. In order to stay motivated, I feel that you need to have a great amount of confidence in the idea. Unfortunately, the long-term potential I saw in the project didn't give me enough faith that it would pay off as well as my other work.

The contract work I'm getting involved in has much more potential than some of my past contracts. The business I'm starting up has real potential to not just make money, but help promote every skill I have. The app series I want to build should not only provide some income, but actually help people.

The feeling of charity, prominence, publicity, and boundary pushing makes me feel better about these projects than hers. Her project, while a great idea, would not have been as fulfilling to me. I would not have pushed any of my skills.

With that in mind, I hope she can find someone that can fill my role. I hope that she can find someone that is as excited for her project as I am for some of mine. I hope she finds a developer that is able to push their own skills to the next level through the project.

It is hard to turn people down, but sometimes it must be done.

Author: Michael Cantrell

I just came across this Facebook post by my good friend Patrick Williams.

What is it? What is the secret to breathing life into my creations. Looking at and hearing the things I have made it all feels so...weak and lacking. Like a plastic plant in a doctor's office, it is alright from a distance but upon closer inspection it is missing substance.

Patrick is an artist and a musician, and he seems to be going through a difficulty I have seen many creators, including myself, go through. He is not happy with what he has created.

Patrick has been working more, or at least more publicly, than I have seen in the past on his artwork, specifically digital painting. In the last few months, I have seen him make incredible improvements, while at the same time being unhappy with the results. This is something that all creators must face.

Unfortunately, the best response I could muster in the moment was the following.

I can't speak for art specifically, but I can speak for the creative work I do. I'm never happy with what I make. No matter how much detail and work I put into it, it still doesn't seem to be enough. That seems to be the creator's dilemma. However, that doesn't mean that others don't appreciate and see substance in your work.

How helpful that is, I'm not sure, but I want to expand upon the idea.

In the work I have done, I am constantly pushing myself to do new things, learn new tools, and take on new projects. Compared to just a year ago, I have progressed vastly in terms of skills. This growth is a great thing, but it is also a double-edged sword. The growth leads to everything I have done in the past looking somewhat amateurish. I can see every mistake I made. I know how it could have been better in some fundamental way.

Along with knowing how I would do better, I also know all of the shortcuts, rough edges, missing features, and weaknesses of what I have built. As the creator, I can see flaws no one else thinks to look for. When someone else views my work, they see it as it is, not as I know it could be. The creator is left with the unrealized vision of their work.

The difference between the vision and reality will always be vast. Nothing can be perfect. If you believe you have made something perfect, you are deluding yourself.

Due to the disparity between vision and reality, a creator must focus on the positives. You must focus on making sure what you created today is better than what you created yesterday. You must focus on growth, without judging your past. Just because today's creation is better than yesterday's doesn't diminish the value of yesterday's work. Lastly, be aware of how others appreciate your work. Watch how it enriches the lives of your audience.

By focusing on these positives, a creator can accelerate their own growth. They can become better at what they do, and get closer to that ideal. You will only feel satisfaction in meeting a goal that was difficult to achieve.

An easy goal is not worth having.

Author: Michael Cantrell

For at least a decade now, I have wanted to start my own business. I've had a range of ideas on what that business might do, some of them good, some of them bad. I tried building and fixing computers; I tried to start forums with friends; I even tried to make video games with friends. Every time I failed to follow all the way through. The reasons aren't really relevant. What is relevant is that each time I learned something new. I learned why I failed, what went wrong, and what to try next time.

None of the failures were catastrophic (I was, and am, still young), but they were important. I learned that getting customers is hard, keeping customers is harder, and finding the right customers is hardest. I learned that it isn't enough to build something interesting; you need an audience. I learned that getting the right people together at the right time is nigh impossible. People tell you what they think you want to hear. Business partners will say they are interested, but aren't really committed. Business partners may have the will, but not the time or money. Business partners may just be looking to come along for the ride.

All in all, the lessons are hard, but the benefit is knowing to not make the same mistakes again. With that said, I am proud to announce that I finally have a business venture that is going to get off the ground. I am already much farther along than I have ever been, and I have no intention of stopping.

I don't want to announce too much now, but over the next few weeks I will illuminate the business more. For now, I just want to reflect on the progress I have made. Not just to tell others, but to reinforce to myself that this is really happening.

Almost 2 years ago, I was approached by a new friend, Tony, with a business idea. He had been thinking about it for a while and wanted to know what I thought. The idea was simple enough. Tony had interested future customers. And it was the kind of app I would want to use. Tony and I talked, and he asked if I was interested in joining him. I had to say yes.

That was 2 years ago. In that time I've built prototypes, learned whole new technologies, rebuilt the prototypes, learned iOS development and Objective-C, became a better server administrator, and even changed jobs. Some months I made progress. Some months I didn't. What never failed was Tony believing in me, and me believing in Tony. Finally, 8 months ago, the prototypes reached a state where they were no longer prototypes. They were ready for finishing touches, but we still needed an iOS app. We needed the core product.

Today, I can say that app is ready. That is why I am writing this blog post. For the first time, a business I have worked on is on track to truly launch. Soon, we will begin our media push. We will launch our websites. We will put our app in the App Store. And we will have a fully-fledged business.

We have the domain name.
We have the Facebook page.
We have the Twitter account.
We have the LLC.
We have the product.
We have the customers.
We have the skills.
We have the drive.

This is real.

I believe this is the pivotal moment for myself and Tony. We've been working for this, and we believe we will succeed.

The future is unwritten, so I intend to write it.

Author: Michael Cantrell

So, I was reading Hacker News and came across this gem of an article: Bitfloor Hacked, $250,000 Missing

The article covers an alleged hacking of a Bitcoin exchange, leading to the loss of $250,000. For anyone who doesn't know, Bitcoin is an electronic currency that enables anonymous, irreversible transactions. The primary goal seems to be eliminating a central authority.

The lack of a central authority, plus anonymity, makes theft of Bitcoin relatively simple. There are only two requirements for theft: access to the Bitcoin wallet, and the wallet being unencrypted. These sound like they would be difficult to achieve, but the number of high-profile thefts shows it isn't as difficult as you might think. Most wallets are going to be accessible through a network, otherwise the user can't make transactions. Then the encryption relies on the user being vigilant, which is never guaranteed. People make mistakes. These hacks can obviously be made, and they can be very lucrative.

Add to that these large exchanges where large numbers of Bitcoins are stored, and you effectively have a bank. Except this bank is not insured against anything: theft, fire, data loss, nothing. For the people who lost their "money", it is just gone.

Now to the nefarious part. I used the word "alleged" above for a reason. We are going on the word of the owner of Bitfloor that he was hacked. What is to say he didn't take the money for himself? I obviously have no proof, and I don't truly believe he took the money, but it is possible. The money is untraceable. Whoever has the "wallet" has the money. He could say he was hacked, and just use the wallet for himself later. Slowly convert it back to USD through transactions with other exchanges, and he can have $250,000.

Why not? At least I know my bank can't walk off with my money.

Author: Michael Cantrell

Last weekend I finally got to the point in an application where I needed to build a private API. I finished the website and wanted to start on the iOS prototype. My first thoughts were:


  • How can I secure the API?
  • What are my endpoints?
  • What should my URL scheme look like?


Naturally, I started by searching the web for "api design", which turned up a whole lot of duds. There were plenty of irrelevant results, and the few that did seem relevant didn't have any specifics. Worse, the few that did have specifics all contradicted each other. Use REST, don't use REST, strict usage of HTTP verbs, only use the GET and POST verbs, etc. Ironically, the only consistent suggestion was to be consistent. This should be obvious, but it is amazing how often the obvious is ignored.

Since I couldn't find any unifying principles, I decided to come up with my own answers to the three questions I had.

How can I secure the API?

The API I was building was for private use only. The plan is to not allow public access, but you can never stop everyone. So I had to choose something that would keep most people out without creating too much trouble for myself.

I started by looking for encryption patterns I could follow. Maybe I could encrypt the data returned and the parameters sent? On the surface, this sounds fine, until you start thinking about replay and man-in-the-middle attacks. If you have a simple endpoint with few or no arguments, plus similar data returns, someone can eventually brute-force your encryption. There's also the added strain on your app and servers of encrypting all of that data. And it's redundant anyway, since HTTPS already encrypts the transport, so the app doesn't have to deal with it.

HTTPS can be broken with a man-in-the-middle attack though, so how can that be prevented? At this point I started thinking seriously about OAuth, though I only knew about 3-legged OAuth at the time, and I didn't need the overhead of user authorization. Luckily, I came across an article discussing 2-legged OAuth and comparing it to 3-legged (2-legged vs 3-legged OAuth).

2-legged OAuth provides everything needed. You get transport security over HTTPS. You can block replay attacks with a nonce. You can reject stale or duplicate calls using a timestamp. You get request authenticity with the HMAC verification. Plus, if your secret key does get compromised, you can change it easily.

I ended up going with 2-legged OAuth, because it gives me as much security as I could build myself, plus it is battle tested. Standard encryption libraries are used, making them less likely to have programming flaws. Plus, the pattern is known if I ever want to open up the API to public consumption.
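To make the signing step concrete, here is a rough Python sketch of 2-legged request signing. It is not spec-exact OAuth 1.0a (the real spec base64-encodes the raw HMAC digest, among other encoding details), and the client key and secret are made up, but it shows the moving parts: nonce, timestamp, canonical base string, and HMAC.

```python
import hashlib
import hmac
import time
import uuid
from urllib.parse import quote, urlencode

# Hypothetical credentials. In 2-legged OAuth the key/secret are issued
# out of band, and there is no per-user token.
CLIENT_KEY = "my-app"
CLIENT_SECRET = "s3cret"

def sign_request(method, url, params):
    """Return params plus OAuth-style fields and an HMAC-SHA1 signature."""
    oauth = {
        "oauth_consumer_key": CLIENT_KEY,
        "oauth_nonce": uuid.uuid4().hex,           # blocks replay attacks
        "oauth_timestamp": str(int(time.time())),  # lets the server reject stale calls
        "oauth_signature_method": "HMAC-SHA1",
    }
    all_params = {**params, **oauth}
    # Canonical base string: verb, URL, and the sorted, encoded parameters.
    base = "&".join([
        method.upper(),
        quote(url, safe=""),
        quote(urlencode(sorted(all_params.items())), safe=""),
    ])
    # No token secret in the 2-legged flow, hence the trailing "&".
    key = quote(CLIENT_SECRET, safe="") + "&"
    signature = hmac.new(key.encode(), base.encode(), hashlib.sha1).hexdigest()
    return {**all_params, "oauth_signature": signature}
```

The server recomputes the signature from the incoming parameters and its own copy of the secret, compares the two, and separately checks that the nonce is unseen and the timestamp recent.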

What are my Endpoints? / What should my URL Scheme look like?

Now that I've decided on my security, what are the actual endpoints and URL scheme? Most of the time, I see the suggestion that your endpoint URL should be a noun, and the HTTP verb describes the action to take.


  • GET - Fetch resources, no modification
  • POST - Send data to be updated
  • PUT - Also send data to be updated (This and POST are debated with some fervor)
  • DELETE - Delete the resource


Using those 3 or 4 (if you like PUT) verbs, you can perform any action. My personal disagreement with this pattern is that you end up with one endpoint performing too many actions. REST purists will lose their minds at that last sentence.

Endpoints should be descriptive. It should be obvious what is going to happen. If I see an endpoint /message and I'm told to POST data to it, I don't know what data I can send and what it will do. Whereas /message/:id/archive is much more descriptive. Then make it a POST verb, and you comply with the spirit. My endpoint tells you what it is going to do, but only responds to a verb telling it to modify data. Here is an example for dealing with messages on a service.

  • /messages - Get a list of messages, use GET with parameters. Pagination is a good example.
  • /messages/:id - Get a specific message using GET
  • /messages/:id/delete - Delete a specific message, use POST since we're modifying data.
  • /messages/:id/archive - Archive a specific message, use POST since we're modifying data.
  • /messages/new - Send a new message, use POST with data to specify the message

The above is just an example, but it is simple for someone to read and understand. The HTTP verbs are conformed to, and the endpoint describes what it does. There is no ambiguity. This is the pattern I have used and had the most luck understanding/explaining.
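As a sketch of how little machinery this pattern needs, here is a toy dispatcher in Python. The message store and handlers are hypothetical, and a real app would use a web framework, but the routing table reads exactly like the endpoint list above.

```python
# Hypothetical in-memory store, just to give the handlers something to do.
MESSAGES = {1: {"id": 1, "body": "hello", "archived": False}}

def create_message(data):
    new_id = max(MESSAGES, default=0) + 1
    MESSAGES[new_id] = {"id": new_id, "body": data.get("body", ""), "archived": False}
    return MESSAGES[new_id]

def archive_message(msg_id):
    MESSAGES[msg_id]["archived"] = True
    return MESSAGES[msg_id]

# (verb, pattern) -> handler. The path says what will happen; the verb
# just confirms whether data is being read or modified.
ROUTES = {
    ("GET", "/messages"): lambda args, data: list(MESSAGES.values()),
    ("GET", "/messages/:id"): lambda args, data: MESSAGES[args[0]],
    ("POST", "/messages/:id/archive"): lambda args, data: archive_message(args[0]),
    ("POST", "/messages/:id/delete"): lambda args, data: MESSAGES.pop(args[0]),
    ("POST", "/messages/new"): lambda args, data: create_message(data),
}

def dispatch(verb, path, data=None):
    """Match verb + path against ROUTES and invoke the handler."""
    parts = path.strip("/").split("/")
    for (route_verb, pattern), handler in ROUTES.items():
        pattern_parts = pattern.strip("/").split("/")
        if route_verb != verb or len(pattern_parts) != len(parts):
            continue
        args, matched = [], True
        for expected, actual in zip(pattern_parts, parts):
            if expected.startswith(":"):
                if not actual.isdigit():  # ":id" segments must be numeric
                    matched = False
                    break
                args.append(int(actual))
            elif expected != actual:
                matched = False
                break
        if matched:
            return handler(args, data or {})
    raise LookupError(f"no route for {verb} {path}")
```

For example, `dispatch("POST", "/messages/1/archive")` does exactly what the path says, while an unrouted verb/path combination fails loudly.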

In the End

It's up to each individual or group how they design their API. The only thing I hope is that you make something that makes sense. Documentation is great, but it's even better if I don't have to read any. With the message endpoint example above, everything is easy to follow. If you want to know what a response contains, call the API and read the JSON/XML/??? that comes back. Always remember KISS (keep it simple, stupid).

Author: Michael Cantrell

Today, Facebook finally released an update to their iOS app, making it native. With that, they brought to an end the largest test-case for web views in native wrappers. I would say it was an astonishing failure. Facebook admitted as much in their post about the new app.

Facebook says that iOS users expected a "fast, reliable experience", and their app was "falling short". If an app isn't fast and isn't reliable, why would anyone use it? My friend Daniel is a great example: he preferred using the mobile website in Safari because it was faster than the app. All of these assertions are true. I used the app on my iPad, because I didn't like having the Facebook cookie in my browser. This provided me with a consistently terrible experience, and left me wanting to use the service less.

The new app released today is miles ahead of the previous app. The look is better. The scrolling is smoother. The app isn't completely useless without a network connection. I do still have some gripes though. For instance, when your news feed updates, it doesn't add to the top of the list, it resets the whole view and moves you to the top, losing your place. Bad design.

By extrapolating what Facebook has done, developers can see that native is inherently better than a web view. Not only is the app significantly faster, but it looks better. The app can do things a web view never could, like have sensible navigation. This is coming from a major company that can afford to spend the time to improve the web views. Facebook abandoned the idea because it wasn't tenable.

Web views don't make sense as replacements for native apps, because they don't provide the experience users are looking for. You can always try to fake it with JavaScript. I've used Sencha Touch, and it gets incredibly close, but you don't get the fine-grained control. You can't emulate things perfectly. Everything is just slightly off, edging into the uncanny valley. Not to mention the gross performance issues.

JavaScript is slow. It always will be. I don't care how fast the V8 engine gets, it will never be as fast as native, compiled code. JavaScript relies on a runtime parser and compiler to get to machine code; that conversion takes time, while compiled code runs straight through the processor. These performance issues will dog developers forever. They're even a problem on desktops, so why wouldn't they cause trouble on lower-powered mobile devices?

A web view will never be as good as a native view. Facebook has proven that. It's time for developers to stop fooling themselves, so that we can stop fooling consumers. A web view in a native wrapper is false advertising. The consumer isn't getting a native app, so stop packaging your webpage like one.

Author: Michael Cantrell

While looking at Hacker News today, I came across this article, "I bet you over-engineered your startup", by Swizec Teller. The post goes into the engineering decisions that must be made, primarily between many small, discrete services, and one monolithic service.

Small & Discrete VS Monolithic

For anyone that has had a large project before, the idea of small, discrete services or tasks makes sense. If you have a to-do list of many easy-to-achieve goals, it is easier to move through them and feel like you're making progress. The same can be said of development. I can build 10 discrete services that are incredibly simple, but do their jobs well. Unfortunately, this falls into the trap of being great on paper, but bad in practice. Whenever a change has to be made to one of the services, it is likely that the other services will have to change as well. Additionally, as the system grows, the amount of message passing between services will likely increase. Then the system ends up spending more time talking to itself than to actual users.

At the opposite end of the spectrum is the idea of a monolithic service. Everything is self contained, creating one large failure point. This also makes development more difficult, as I have to know how the whole system works to do anything.

Teller reaches the conclusion that the services should be simplified by combining related pieces. There is still the separation of tasks, but the tasks are grouped. This is a great idea, and definitely where developers should end up. What worries me is that this wasn't done in the first place.

Don't We Know Better?

Shouldn't developers have the foresight to know that services should be grouped? Shouldn't things be engineered to be maintainable and efficient. For instance, if you are going to pull data in from external sources and store it for use later, shouldn't you have one system that pulls data from all of the services you want? I'm not saying one monolithic piece of code, but a queuing system that can handle different jobs. All of the data jobs are discrete, but run in the same sandbox, so to speak. Then you can build in redundancy, requeue on failure, logging, error notification, etc in one place. This gives you all of the advantages of discrete services without separating your data everywhere. You can run aggregate jobs without asking different services for data. This could be extended to many other use cases. Most are backend data processing, but that is where I have seen most bugs come from.
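To illustrate the "one sandbox" idea, here is a minimal Python sketch of a job queue with retries, logging, and a single place to hook in failure notification. It is a toy, not a production queue, and the job names are invented; the point is that every data job, whatever its source, gets the same redundancy and error handling for free.

```python
import logging
from collections import deque

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("jobs")

class JobQueue:
    """Toy sandbox: discrete jobs share one queue, one retry policy, one log."""

    def __init__(self, max_retries=2):
        self.queue = deque()
        self.max_retries = max_retries
        self.failed = []  # one place to drive notifications from

    def add(self, name, func, *args):
        self.queue.append((name, func, args, 0))

    def run(self):
        while self.queue:
            name, func, args, attempts = self.queue.popleft()
            try:
                func(*args)
                log.info("job %s succeeded", name)
            except Exception as exc:
                if attempts < self.max_retries:
                    # Requeue on failure instead of silently archiving the job.
                    log.warning("job %s failed (%s), requeueing", name, exc)
                    self.queue.append((name, func, args, attempts + 1))
                else:
                    log.error("job %s permanently failed: %s", name, exc)
                    self.failed.append(name)
```

A fetcher for each external source would just `add` its jobs here; failures are retried, logged, and surface in `failed` rather than disappearing into an archived status no one checks.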

Much of the code I have seen in projects I have inherited showed a bad design sense. I'm not talking about visual design, though there was some of that, but the design of the services. It's not even differences in coding style, preferred tools, language, or any of that. I saw a job queue that would encounter errors, update the job status, and then archive the job. No notification, no larger logging system, nothing. There was no way to know something went wrong until you started looking for it. I've seen data fetches that just skip over bad results and continue as though valid data was received. I've seen web applications that have over 100 dependencies and no documentation to speak of.

Those examples come from large systems. I'm not talking about some site that gets a few hundred visitors a month, I'm talking about sites that get at least tens of thousands in a day. I know all of the excuses that developers can make for these as well. The first that springs to mind is that there wasn't enough time. I've even been there. The difference is, I stopped letting myself use that excuse.

No More Excuses

I think all developers should stop giving excuses. If something can't be done properly in the time given, say so. Don't be a dick about it, just tell the truth: the system cannot be built properly in the time available. Then you have to follow up with alternatives. What changes can be made so the system can be built properly? What can be taken out to give you time to put in proper logging? What can you do now to make sure you can save time on the next project? Build a logging framework you can use and like, or use an existing one. Find a queuing system you like, or roll your own. If you can do these things, you'll only make your job easier.

Inevitably, you'll get pushback from your manager/boss/employer. Be tactful and explain the situation. If they don't buy into it, leave. A good developer should have no problem finding a job right now. I get contacted by recruiters on LinkedIn all the time. AuthenticJobs is putting out new listings every day. Job boards have thousands of listings for developers.

Don't be afraid to do good work. Don't be afraid to go somewhere that will let you do good work.

Author: Michael Cantrell

I recently ran into an issue with a Flash uploader that took several days of on-and-off work to figure out. Hopefully my time investment can save you some of yours.

To set the stage, we had a production system and a development system. Both had identical code. On production, the flash uploader worked perfectly. On development, the uploader immediately threw an error.

I started by checking the obvious: file permissions, PHP upload size and post size settings, and a regular HTML file upload. All of those checked out.

Next I analyzed the HTTP requests and could see that there wasn't even a call from the Flash uploader to the upload endpoint of our application. That's where things started to get interesting. We were using Uploadify, but it was an incredibly old version, so updating was out of the question.

I took advantage of the callbacks in Uploadify and was able to get the JSON objects related to the upload. The only useful information they contained was an Upload Error #2038. After searching, I couldn't find any specific description of what the error meant, until I lucked upon a mailing list post (I can't even find the list anymore). It turns out the problem was the SSL certificate.

On production, we had a valid SSL certificate. On development, we had a self-signed "snakeoil" certificate. This meant that on development the browser would notify you of the bad certificate, but you could just accept it and continue. Flash, however, refuses to do that, probably due to its strict security rules. Overall, it is a good thing that Flash rejects the certificate, but it would be nice to get a bit more information about what is going on.

Long story short, if you're getting the upload error #2038 with a flash uploader, make sure your SSL certificate is valid for the domain you are using.


Yesterday was the start of Apple's WWDC (Worldwide Developers Conference) in California. As they do every year, Apple kicked off the conference with a keynote. Yesterday's keynote was important because it was the first without Steve Jobs. I personally think they did quite well, and anyone worried about Apple post-Steve shouldn't be.

In the keynote, a number of really cool things were talked about. From the software side, iOS 6 and Mountain Lion both look like great updates, each deserving of its own post. Then there were the updates to the MacBook Air line and the existing MacBook Pro line, both excellent. The most notable thing from the keynote, though, is definitely the next-generation MacBook Pro.

The next-gen MacBook Pro fits most of the rumors that were flying around pre-keynote: the Retina display, USB 3.0, thinner, SSD. All of these things are truly an insight into what Apple sees as the future of mobile computing. They even said so themselves. What interests me is that if you look at the whole, you can start to extrapolate where they might be heading.

For instance, I'm sure the Retina display looks fantastic. Everyone who has written about it has said so. I tried to look at one today, but the Apple Store I went to said they wouldn't have it until tomorrow. The big question in my mind is: why didn't one of the desktops get the Retina display first? I would think it would be easier to fit a Retina display in an iMac, or at least to justify the cost. Yes, the screen is larger, but there is more space to work with, so you don't have to completely reengineer the display casing. I think Apple has been focused more on their laptop lines than their desktop lines for some time now, and I'm sure their sales data backs up that focus.

Going further, if you look at the ports available, you can see the same thought process. Two Thunderbolt ports are a big step up, since you can daisy-chain them. The idea being that if you want a desktop, why not buy a Thunderbolt Display? That provides you with an Ethernet port (missing from the laptop), extra USB ports, and another Thunderbolt port on the back. Plus, the display has a charging cable coming off of it. So when you want to sit down and use your desktop, you plug in power and Thunderbolt, and you're ready to go. With the second Thunderbolt port you can connect another monitor or storage.

Storage is the other big step forward. The SSDs now start at 256GB and go up to 768GB, much more than the MacBook Air's 64GB. Most people won't need external storage anymore, and they get the bonus of SSDs' super-fast read speeds. Add in the fact that SSDs have no moving parts like standard drives, and that's yet another win for mobile computing. Your laptop won't feel sluggish compared to a desktop, so why get a desktop?

USB 3.0 is also only available on the laptop lines. Understandably, Apple may just be waiting to update the iMac desktops, but it's still an interesting development. It is unusual for a laptop to get a feature before a desktop; I assume it's easier to add to a desktop, because you have fewer constraints than in a laptop.

Obviously, this is all conjecture, but the lack of a meaningful update to the Mac Pro line could be another indication of the impending death of desktops. I have read articles about an email from Tim Cook saying that the Mac Pro will get an update next year, but that is still a ways off. MacRumors' Buyer's Guide currently shows a "don't buy" for all desktop Macs, with rumors of the aforementioned iMac and Mac Pro updates next year. Given that timetable, who's to say the strides from the new MacBook Pro won't have trickled down, making the laptops still a better buy than an updated iMac?


Like the lowly caterpillar, a time of transition has arrived. Soon I will emerge as a beautiful butterfly!

All joking aside, my time at What's Up Interactive will come to an end next week. Shortly after, I will start my new job in downtown Atlanta at Vitrue.

Vitrue helps companies manage their social presence using Vitrue Publisher, Tabs, Analytics, and Shop. Using these tools, companies are able to grow their followers on Facebook and do some really cool things. Lucky for me, it was also announced recently that Vitrue is being acquired by Oracle, and the deal will close this summer. I'm getting in at just the right time, when there is a lot to work on.

What excites me about the position is that I'll be working on a team bigger than any I've worked with previously. The largest I've been on to date was ten people, combining frontend, backend, and design. At Vitrue, the engineering team is over forty people. This means I'll have a great opportunity to learn from others, teach others, and grow my network. This will also be my first time working for a product company, which I'm excited about as well.

Working on products gives a developer the ability to go deeper into a technology than they can with short term client projects. I'll get to really work to make the product great, and take the time to work on the details. With client work, you get a great breadth of knowledge doing many things, but don't get to spend as much time refining pieces. Both have their benefits, but both have their pitfalls. Product work can lead a developer to having too specific a skillset, but as long as you are aware of this, it can be avoided.

Beyond just the cultural opportunities, I'll have some interesting development challenges. Vitrue uses Ruby on Rails and soon Oracle databases, neither of which I've worked with before. They've trusted that I can learn Rails quickly, which shouldn't be a problem. I've already started delving into it, and there are definitely some nice libraries/APIs available. Active Record looks to be a great object-relational mapper, and I'll be interested to see how Vitrue transitions from MySQL to Oracle databases.

Even with all of these new opportunities, it is important to remember how I was able to get here. I owe a lot to What's Up Interactive and the people I work with. They hired me out of college and gave me some fantastic development opportunities along the way.

During my time I was able to get into version control systems with Subversion. I led the push to start using Zend Framework for our more complicated applications. They gave me the opportunity to learn Objective-C and start developing iPhone and iPad applications. Once I had that knowledge, I was able to build 270toWin and win an Addy Award, be a Webby Honoree, and be featured by Apple. From there, I took over most of the hosting duties and learned how to use Amazon Web Services. With that knowledge, I set up servers to handle the largest lottery jackpot in North American history for the Georgia Lottery. All of these were fantastic opportunities for me to learn and shine.

The decision to move was a tough one, but when presented with such a great opportunity, I couldn't turn it down. My co-workers at What's Up have been very supportive. They're a great bunch of people, and I wish them the best. I'm proud of what we accomplished together, and I'm sure they will continue to be successful.


My cousin graduated last week and asked me if I could get some resources together for him to learn to program. He's interested in taking it as a major in college, but doesn't know if he'll like it. Hopefully these will help him out. If you're in the same situation, these can obviously help you out as well.

Code School is one resource I am currently using to learn Ruby. They have great courses where you watch a video discussing a topic and then have an interactive session in the browser that validates what you have done. For absolute beginners, it is recommended that you start with the Try Ruby course.

Codecademy is another great resource. They focus more on frontend web development, meaning the presentation layer that the user sees, as opposed to the backend, which deals with the storage and processing of data. This is still a great way to get started and dip your toes in the programming pool.

Dive In

That exhausts the actual teaching sites I know of. The other way is to just dive in and start with some tutorials. Nettuts+ is a great site, and they even have a PHP tutorial track. I think PHP is a great way to get started, since it is a pretty forgiving language.

The key is to learn whatever you can whenever you can. Ignore the common refrains of "You shouldn't learn X, you should learn Y instead. It's a better language." Learn both and decide for yourself.

My last bit of advice, start a blog. Then blog about the things you're learning and the troubles you've encountered. You'll be able to look back and see how far you've come. Plus, you may help someone else along the way.


I saw this article the other day talking about Nvidia's Kepler chips and how they will "serve up a desktop experience from the cloud". This whole idea is crazy to me.

Don't get me wrong, I'm sure the Kepler chip is awesome. Based upon what is said about the chips themselves, they seem to have some massive power behind them, and they will be incredibly helpful in software that can make use of the GPU. I just don't buy into the whole stream-your-desktop-from-the-cloud idea.

The premise is just a bad one. What happens if your network is down? What happens if the cloud goes down? What happens if you have some latency issues? Network is slow? Bandwidth caps? You get the idea.

Then Rob Enderle takes it a step further and asks, "what if you could run Windows on a Mac, or an iPad, or anything that would host a tiny client". Another terrible idea! I've used remote desktop software to access a Windows desktop from my iPad, and it was not a great experience. Windows is not built for touchscreens; that is the whole purpose behind Metro in Windows 8. Microsoft has already tried that, and it didn't work. Also, if you want to run Windows on your Mac, you can: use VMware, Parallels, or any other virtualization software. You could even use Boot Camp and have a full install to go to.

The concept of a central operating system with a thin client on your device is appealing, but there are just too many problems with it. I think having a central server with preferences that sync to devices is a better idea. That way the preferences are stored on the device, and they update when they can. This is the same concept behind iCloud and Dropbox. Dropbox is specifically for files, of course, but it has been used that way for BBEdit's preferences. With this sort of setup your device isn't useless when the internet is down. Not being able to play Angry Birds just because the internet is down would be incredibly frustrating. I was always mad when I couldn't play Half-Life because I couldn't get on Steam.
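The sync-when-you-can idea fits in a few lines of code. This is a deliberately tiny sketch with hypothetical names; real services like iCloud also handle conflict resolution, authentication, and much more:

```javascript
// Local-first preferences: reads and writes always hit the device copy;
// changes queue up and flush to the server only when a connection exists.
var localPrefs = {};
var pendingSync = [];
var serverPrefs = {};            // stand-in for the remote store
var online = false;

function setPref(key, value) {
  localPrefs[key] = value;       // the device keeps working offline
  pendingSync.push({ key: key, value: value });
  flushIfOnline();
}

function flushIfOnline() {
  if (!online) return;
  while (pendingSync.length) {
    var change = pendingSync.shift();
    serverPrefs[change.key] = change.value;
  }
}

setPref('theme', 'dark');        // offline: stays local, queued for later
online = true;
flushIfOnline();                 // connection restored: the queue drains
```

The device is never blocked on the network; the server just catches up whenever it can.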

Later in the article, Rob Enderle does get into some use cases that are great, like modeling galaxy-class events where you are tracking millions to billions of objects over a billion years. This kind of computation can be of enormous use to scientists. The datasets being worked with now are on that kind of scale, so a chip that can process them quickly is great.

Finally, Rob Enderle goes into possible uses in robotics. This is another frontier that can use this sort of processing power. I personally believe that in order to build AI approaching our intelligence, we will need massive parallelism. The human brain runs hundreds if not thousands of concurrent "threads" to keep us alive; a robot will need the same amount of computation to stand, walk, and talk.

Looking to the future of what we can do with our most advanced technology is great, but you have to stay realistic. Robots and massive-dataset modeling are feasible. Streaming operating systems are a nice idea, but we just don't have the infrastructure to support them.


I just read Martin Fowler's ORMHate post, and I must agree with him. I was skeptical the first time I used an Object Relational Mapper, but once you get used to them, they are fantastic.

Just a brief review of what an ORM is: its basic function is to convert relational database tables into objects automatically for use in your application. So your user table, with its associations to user posts (for instance), can be turned into a user object with the ability to fetch user post objects. This is great, because it avoids the need to spend large amounts of time writing repetitive code to access your data and turn it into objects. As Martin pointed out, though, you do need to put in some time to make it work exactly right with your database. Once you have this interface, you can create an application much faster, because you don't have to spend so much time building the base access layer.

I do think Martin missed a few important points, though. There are several pitfalls I have seen that a developer needs to be aware of.

The first and most important, in my opinion, is being aware of what the provided calls actually do with regard to the database. For instance, how exactly does the count call work? Does it fetch all of the objects from the database and then count them, or does it use the count function built into your database? A good ORM will use the latter, but developers should find out for sure. Another example (unfortunately I can't find the article) comes from someone profiling their application. They noticed two nearly identical calls to their database. The first was to find out if there were any records meeting a certain criteria. If there were, the second call would fetch those records and return the data. Obviously, they were making two calls for the same data, but they didn't know the check worked that way. The solution ended up being a flag telling their system to return an empty array when no results were found, instead of false or null.
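That exists-then-fetch pattern is easy to see with a toy example. This sketch uses an in-memory array as a stand-in for the database and counts queries so the duplicate round trip is visible; no real ORM is being modeled here:

```javascript
// In-memory stand-in for a database table, with a query counter so the
// extra round trip in the naive version shows up.
var rows = [{ id: 1, active: true }, { id: 2, active: false }];
var queryCount = 0;

function query(predicate) {
  queryCount++;                  // every call represents a round trip
  return rows.filter(predicate);
}

function isActive(r) { return r.active; }

// Pitfall: one query to check existence, then a second for the same data.
function fetchActiveNaive() {
  if (query(isActive).length === 0) return false;
  return query(isActive);
}

// Fix: fetch once and let an empty array mean "no results".
function fetchActive() {
  return query(isActive);
}

fetchActiveNaive();              // costs 2 round trips
var naiveCost = queryCount;
fetchActive();                   // costs 1 round trip
```

Returning an empty array instead of false also means callers can always iterate the result without a type check first.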

A second pitfall is forgetting how to write basic SQL or just interact with a database. I think all developers writing code that interacts with a database should feel confident working in a SQL command line. Not for bragging rights or anything along those lines, but because it will help you write better queries. Knowing how long your queries take, how your database is actually laid out, and what unnecessary data you may be fetching can lead to optimized queries, saving money. If a developer can use the database less, or at least more efficiently, the database doesn't have to scale up as much. For one of the clients I worked on, I was able to take another developer's query from 5 seconds down to 200 milliseconds by removing a join that was unnecessary in that context. That's a huge savings in resource usage. (The 5 seconds mainly came from an awful database design, but that's a whole other post.)

One final pitfall comes when your original developer leaves, and the remaining or new developers are left with only an ORM. They may have little understanding of why certain database design decisions were made, or be completely unaware of them. Again, I encountered a system that had two ways of determining whether a user was on the email list. One of them was a legacy from before even the previous developer, but I wasn't made aware of it when he left. The obvious opt-in flag was inaccurate for those accounts. Confusion ensued, because the ORM did not account for those columns; they were set to not be included in the models. Suffice it to say, I thought they were deprecated legacy columns that had already been dealt with, surely only still there because no one had bothered to drop them. Not to mention the column name was less than descriptive. This could all be written up as a lack of documentation, which is true, but it is still something to take into account with ORMs.

Definitely don't take these as reasons to not use an ORM. They are well worth it. These are just warning signs on the road to make developers aware of things to think about. "Bridge ices before road" seems obvious, but you would be surprised how many people don't think about it.


As you can see, I have some reading ahead of me. I finally got around to ordering the A Book Apart books that I wanted, now comes the time to read them. I'll be posting reviews of each as I finish them, and I'm hoping to learn a lot.


I may be a little late to the show, but I just read about Blueseed the other day, and I had to say something about it. It seems some startup founders are frustrated that they can't get the best people in the world working for them, because of trouble getting visas for them to work in the United States. To solve this, someone had the amazing idea of just doing business in international waters. The plan is to get a cruise ship, load it up with entrepreneurs, sail it out to sea, and start business.

My first thought was how crazy these people are, but the more you think about it, the more it starts to make sense. I assume getting a US visa is difficult; I've read about the difficulties, so I take that to be true. Beyond that issue, though, there are other reasons why this is an amazing idea. Imagine you are working on a new company and facing difficult problems. Who do you want to be surrounded by? Possibly other people doing exactly the same thing. Maybe people who have already been through the problems you're having. Maybe even someone building a company to solve the very problem you're facing.

Blueseed seems like it may actually be entrepreneur nirvana. Come out on a cruise and surround yourself with highly-driven people. People that are trying to accomplish great things. People that have high expectations of themselves. People that you can learn from, and people that you can teach. This takes the idea of a startup incubator to a new level.

If I had a company that needed this, and I had the money, I would sign up in a heartbeat.


I have read a lot recently about Node.js, and it seems to be the flavor of the week right now. The gist of Node.js is that it is JavaScript on the server, and it uses an "event-driven, non-blocking I/O model that makes it lightweight and efficient". That probably doesn't mean a lot to most people, even many programmers. Suffice it to say that when you start an I/O operation, you don't wait for it to complete, hence "non-blocking".

As an aside, I'm going to completely brush over the fact that JavaScript is slow. Let's pretend it's as fast as other web languages (it's not).

At first glance, that seems like a great way to write software. Waiting for a disk is among the slowest things a computer does these days; a call to the hard drive is at least a factor of 10 slower than other operations. That's exactly the kind of waiting you would want to avoid. Here comes the pitfall, though: didn't you make that call for a reason? Don't you need the result of whatever you just asked for?

After reviewing every web app I've ever written (or at least the ones I remember), I couldn't think of any instance where an intensive call was made that didn't require the result. If I make a call to a database to get some data, I'm planning on using it and displaying it to the user. So that raises the question: when would you not care about the result? I can only think of a few instances.

The most obvious are calls where you are logging information to non-volatile storage. Say I update my account; you may want to log the previous values and the new values, giving you a "paper" trail to follow. While this is a legitimate use, I can shoot holes in it as well. Should I really completely rewrite my program for something so simple? Why not just make sure the logging system is fast? SQL inserts are fast. Increase the RAM on your database server. Make sure your database is tuned properly. Queue your logging calls so they run after you've finished output and flushed the output buffers.
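Queuing the log writes until after the response goes out doesn't require a new runtime. A minimal sketch of the idea; the request handling and field names are hypothetical, and in a real app the batched write would go to a database rather than an array:

```javascript
// Buffer audit-log entries during the request, then write them in one
// batch after the response has been sent, so the user never waits on logging.
var logBuffer = [];
var writtenLogs = [];            // stand-in for the log table

function logChange(field, oldValue, newValue) {
  logBuffer.push({ field: field, from: oldValue, to: newValue });
}

function writeLogs() {
  // One batched insert instead of several scattered through the request.
  writtenLogs = writtenLogs.concat(logBuffer);
  logBuffer = [];
}

function handleRequest(sendResponse) {
  logChange('email', 'old@example.com', 'new@example.com');
  sendResponse('account updated'); // the user gets the response first
  writeLogs();                     // then the log entries are flushed
}

var responses = [];
handleRequest(function (body) { responses.push(body); });
```

In PHP terms, this is the same trick as flushing the output buffer and doing your inserts afterward.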

Another would be WebSockets. WebSockets are great, because they give you a persistent connection to the server, so requests can flow in real time rather than on a polling schedule using AJAX. Running JavaScript on both the server and client side would also make sense; you can easily pass JSON back and forth as needed. The main problem here is that WebSockets are currently only supported in three browsers. Going beyond that, what data are you actually going to be passing? Primarily data going into a database, most likely. So here we are again, stuck waiting on a database, but this time we have to have the result. I'm not speeding up the request time; I still have to wait for the database to return data. All JavaScript gives you is some horrendous code. I've heard it referred to as the "pyramid of death": you get buried in callbacks, giving a pyramid shape like so:

getData(param, function(data) {
    processData(data, function(result) {
        sendData(result, function(response) {
            // ...each new step nests another level deeper
        });
    });
});
Now just imagine that actually did something useful. You can easily see how bad things can get.
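To be fair, the pyramid can be flattened with named functions, though the control flow still reads inside out compared to straight-line code. A sketch using synchronous stand-ins for what would be async I/O calls in real Node.js (the function names and the trivial math are made up for illustration):

```javascript
// Named callbacks flatten the nesting. In real Node.js, getData,
// processData, and sendData would be asynchronous I/O; here they are
// synchronous stand-ins so the shape of the code is the focus.
var steps = [];
var finalResult;

function getData(param, callback)     { steps.push('get');     callback(param + 1); }
function processData(data, callback)  { steps.push('process'); callback(data * 2); }
function sendData(result, callback)   { steps.push('send');    callback(result); }

function onData(data)       { processData(data, onProcessed); }
function onProcessed(value) { sendData(value, onSent); }
function onSent(result)     { finalResult = result; }

getData(1, onData);          // get -> process -> send, no pyramid
```

You trade the pyramid for a trail of tiny functions, which is arguably better, but it's still a lot of ceremony for "do three things in order".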

Those are the best examples I can come up with for when Node.js might be useful. It was even talked about at the ConvergeSE conference during one of the workshops, and I came away feeling as though all of my complaints had been validated. The speaker showed some interesting applications, but none of them needed to be in Node.js; they could have been in Ruby, PHP, Python, C#, etc. I've even spoken to people who have used Node.js in production, and they openly admitted it was a complete mistake they were working on replacing. It turned out to be slower than Ruby on Rails.

Either way, if someone can show me a use that makes sense, I'll be glad to look at it. In the meantime, Node.js is just a hip fad pushed by people who want something new to talk about, people who like buzzwords: in this case "non-blocking", "event-driven", and "real-time".

I'll close with a wonderful video summarizing the problems with Node.js with humor. Also, a little NSFW due to language.
