Mindshare Strategy – Tell your own story

Domains, Registrars, and DNS … oh my!

Fri, 13 Jan 2012 – by Eric

About a year ago, I bought a short URL so I didn’t have to use tinyurl.com or bit.ly on Twitter.  It’s a .me address, and at the time the best/cheapest .me registrar was GoDaddy.

I don’t host sites there, but I registered the domain there anyway.  Then I immediately pointed the domain at my old shared hosting account over on 1and1.

Things worked perfectly!

After a few weeks, I decided to add an email address at this short URL.  But I didn’t want to use 1and1’s email system, so I set up Google Apps for the domain and pointed my MX records at Google.

Again, things worked perfectly!

Then … two weeks ago I started getting inundated with spam.  Lots of spam.  I got about a hundred or so “Out of office” emails an hour from a mailing list in Russia.

All in response to an email that seemingly originated from my address.

Turns out, some spammer was using my email address in the “from” field of their messages.  As a result, I got all of the “unsubscribe” and “out of office” responses.

One system even blacklisted my email address as a known spammer.  I didn’t find out until I’d fielded several angry client phone calls – why haven’t you sent us the code we paid for?!

Simple Solution

Google recommends setting up authentication for your email to prevent this.  It’s actually pretty easy.  Just add a TXT record to your DNS containing an SPF policy (and, optionally, a DKIM public key) so that email recipients can verify that messages appearing to be from you are actually from you.
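For illustration, the records in question are plain DNS TXT entries.  The SPF value below is the one Google documents for Google Apps; the DKIM selector and public key are placeholders for values generated in your own Google Apps control panel:

```dns
; SPF – tells recipients which servers may send mail for this domain
@                  IN TXT "v=spf1 include:_spf.google.com ~all"

; DKIM – a public key recipients use to verify message signatures
; (selector and key shown here are placeholders)
google._domainkey  IN TXT "v=DKIM1; k=rsa; p=MIGfMA0GCSqGSIb3...IDAQAB"
```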

Well, simple if you don’t use 1and1.

You see, shared hosting accounts with 1and1 use a very basic DNS system.  You can use their name servers, or remote name servers.  You can use their MX records or set your own MX records.

But you can’t add a TXT record.


I was one of the many who jumped ship when GoDaddy announced their (short-lived) support of SOPA.  I quickly moved my email domain (and several other domains) over to Namecheap.

But to keep things simple, I had left the DNS in place – still pointing at 1and1.

To add a TXT record, I moved DNS hosting for my .me domain over to Namecheap as well.  I figured it would be easy enough:

  • Keep the domain registered with Namecheap
  • Register the DNS with Namecheap
  • Set the TXT record I needed for authenticated email
  • Set an A record to point back at my 1and1 hosting account

And sure enough it worked!  The spammy emails stopped.  My sites still worked.  Everything was happy.

Until I got the email …

The Problem

We at 1&1 Internet have noticed that you have changed the name server of your eam.me domain. Your new settings are:

DNS1: dns3.registrar-servers.com
DNS2: dns2.registrar-servers.com
DNS3: dns5.registrar-servers.com
DNS4: dns4.registrar-servers.com

Because of these new settings, your website hosted with 1&1 Internet can no longer be reached via the eam.me domain. The e-mail addresses included in your Developer Package, if any, have also been disabled.

If you still intend to keep using our services, you can enter the following name servers with your registrar by February 23, 2012 at 11:46:00 PM and continue using our services as usual.

DNS1: ns51.1and1.com
DNS2: ns52.1and1.com

If by February 23, 2012 at 11:46:00 PM you have not registered with our name servers, we will remove the eam.me domain from our systems. If you want to use your domain with your 1&1 Package after February 23, 2012 at 11:46:00 PM, you can specify this configuration on the Control Panel once more and enter the name servers specified there with your registrar.

Yeah … this doesn’t work for me.  The applications (namely YOURLS) are still running on my shared hosting system over at 1and1.  But I need the DNS running through Namecheap so I can keep the TXT entry available for email authentication.

I tried one workaround – setting a CNAME record for the domain to point to my 1and1 account – but this ate my MX and TXT records as well.

At the moment (assuming no one offers a better alternative) it looks like I’ll just need to move my applications to a different system entirely.  That is not an optimal solution … and this entire c****** f*** has me wanting to dump 1and1 altogether.


I emailed 1and1 a potential workaround that I found in their own knowledge base.  Essentially, it recommends setting the external domain, pointing it at an external DNS system, then pointing the external DNS system back at 1and1.
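In zone-file terms, that workaround looks roughly like the snippet below.  The onlinehome.us hostname is a placeholder for the account’s actual technical subdomain, and the Google MX list is abbreviated to a single entry:

```dns
; DNS hosted at Namecheap, with mail at Google and web traffic back at 1and1
@    IN TXT   "v=spf1 include:_spf.google.com ~all"
@    IN MX 1  aspmx.l.google.com.
www  IN CNAME sXXXXXXXXX.onlinehome.us.
```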

My email was along the lines of: “I found a solution posted on your site; had you directed me here in the first place, it would have saved us all a lot of time.”

They responded … by directing me to the exact same link:

Dear Eric Mann, (Customer ID: XXXXXXXX)

Thank you for contacting us.

We advise you to use the method on the link below.

How do I use my own name server for a 1&1 domain?


The technical subdomain for your account is sxxxxxxxxx.onlinehome.us

If you have any further questions please do not hesitate to contact us.

> For the record, I have found a way to make this work. According to your
> own knowledge base, it *is* possible to host a site with your system while
> the domain and DNS is registered elsewhere:
> http://faq.1and1.com/domains/domain_admin/dns_settings/18.html

I’m not sure why … but this irritates me even more than the original fiasco.

Post Supplements – A Concept

Fri, 23 Dec 2011 – by Eric

A few months ago, WordPress UX Lead Jane Wells posted a request to WordPress’ Trac ticketing system.  The idea was to find a better way to insert “stuff” below WordPress posts:

Inserting the sharing and like rows at the bottom of the post text before the byline/classification metadata seems wrong. It should go below that, so it is closely related to commenting, not part of the content itself. The plugin-generated widget is not “by” the post author, after all.

I haven’t used very many social media plugins for exactly this reason.  Nor have I ever used a “related posts” plugin.  They always seem to conflict with one another and build up a bunch of unnecessary cruft below my content.

So for the past few months, I’ve been thinking about different ways to handle this.

The Art of Manliness adds an author box, a Facebook "like" button, a related content gallery, and a subscription feature to the bottom of each post.

Template Parts

My first idea was to just use a template region within a WordPress theme.

Each individual theme would call some variety of get_template_part() to set up whatever region is being used.

Plugins would then provide content for these templated regions.  So get_template_part('social_media') and get_template_part('related'), for example.

The problem with this, though, is one of standards.  What template regions will be supported?  How will new ones be developed?  After the battle over post formats, this isn’t a standardization fight I want to pick.

Post Intents

Another developer suggested modeling the system after the emerging Web Intents standard.

Basically, a new function would be added to WordPress to output various registered post intents – share this, “like” this, subscribe to updates, etc.

Individual plugins would then register these intents and their various actions, but leave it to the theme to style the presentation.

I was entirely sold on this idea.  Well, until a long Twitter conversation with Helen Hou-Sandi:

“Thinking in CMS-land, I could see using that template tag for a lead form, or a gallery, or something not sharing.”
– Helen Hou-Sandi

When you take things like photo galleries, related posts, and actions other than social media integration into account, the concept of post intents no longer makes sense.

New Action Hooks

Chris Pearson adds a personal "follow me" Twitter link, a "Tweet this" link, and several other action items below his posts.

I love having a large number of action hooks to use when I’m building a theme.  I can move content around, add custom views to my content, manipulate the display.  The sky’s the limit.

So when several developers suggested that we just add a few action hooks before and after the post content, I was intrigued.

But really, this is what themes are already doing.  And merely adding a few extra action hooks just gives plugin authors the ability to inject their own markup into the flow of your otherwise well-built design.

Considering some of these add-junk-to-the-bottom-of-my-content plugins already break the display, why would they function any differently if I gave them a specific hook to tie in to?

My Proposal – Post Supplements

Instead, I have in mind a hybrid approach: a new action hook combined with a new registered object – post supplements.

Think about widgets for a second.  There’s a specific area to display widgets (the sidebar), each widget is registered with WordPress and placed in this area, and a well-coded widget leaves much of its markup to the theme (before_widget and after_widget).

So think of two things:

  1. A new object defined by WordPress: WP_Supplement
  2. A new action hook/function used by WordPress to output registered supplements in the theme

The theme can register supplements (one for sharing, one for related posts, one for a photo gallery, one for an about-the-author box, etc).  Various plugins can also register supplements.

Then, just like with widgets, these supplements can be added to the theme.

Widgets use add_action( 'widgets_init', create_function( '', 'register_widget("Foo_Widget");' ) );

Supplements could use add_action( 'supplements_init', create_function( '', 'register_supplement("Foo_Supplement");' ) );
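To make the proposal concrete, here’s a minimal, dependency-free sketch of how the pieces could fit together.  Everything here – WP_Supplement, register_supplement(), wp_post_supplements() – is the hypothetical API from the proposal above, not anything that exists in WordPress today:

```php
<?php
// Hypothetical base class, mirroring the role WP_Widget plays for widgets.
abstract class WP_Supplement {
    public $id;

    public function __construct( $id ) {
        $this->id = $id;
    }

    // Return markup; the theme decides where and how it's wrapped.
    abstract public function render();
}

// Global registry – supplements output in the order they were registered.
$GLOBALS['wp_supplements'] = array();

function register_supplement( WP_Supplement $supplement ) {
    $GLOBALS['wp_supplements'][ $supplement->id ] = $supplement;
}

function unregister_supplement( $id ) {
    unset( $GLOBALS['wp_supplements'][ $id ] );
}

// Theme-facing template tag, analogous to do_action( 'post_supplements' ).
function wp_post_supplements() {
    $out = '';
    foreach ( $GLOBALS['wp_supplements'] as $supplement ) {
        $out .= $supplement->render();
    }
    return $out;
}

// A supplement a sharing plugin might register.
class Sharing_Supplement extends WP_Supplement {
    public function __construct() {
        parent::__construct( 'sharing' );
    }

    public function render() {
        return '<div class="supplement-sharing">Share this post</div>';
    }
}

register_supplement( new Sharing_Supplement() );
// A theme's single.php would then call: echo wp_post_supplements();
```

Because the registry is ordered and the theme owns the single output point, position and styling stay entirely in the designer’s hands.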

Decisions, Not Options

I don’t envision a UI for this feature.  Everything can be handled programmatically – supplements appear on screen in the order in which they were added to the system.  Likewise, they can be unregistered the same way.

This keeps the WP interface lean and mean, but still gives us a more efficient way to add content to a post in exactly the place the theme designer intends: do_action( 'post_supplements' ) or wp_post_supplements().

The theme designer would be in complete control of the position of these elements, what styling (if any) separates them from the content of the post, and what styling (if any) distinguishes them from one another.

This would be non-trivial to build, so before I dig in to the code I want to solicit feedback on the idea.

Does this make sense from a usability standpoint?

Apple Patents Multi-tasking

Wed, 21 Dec 2011 – by Eric

The Apple-vs-Google-vs-EveryoneElseInTheWorld patent battle has gotten beyond ridiculous.

I just read that Apple’s latest patent victory is for:

A portable electronic device displays, on a touch screen display, a user interface for a phone application during a phone call. In response to detecting activation of a menu icon or menu button, the UI for the phone application is replaced with a menu of application icons, while maintaining the phone call. In response to detecting a finger gesture on a non-telephone service application icon, displaying a user interface for the non-telephone service application while continuing to maintain the phone call, the UI for the non-telephone service application including a switch application icon that is not displayed in the UI when there is no ongoing phone call. In response to detecting a finger gesture on the switch application icon, replacing display of the UI for the non-telephone service application with a respective UI for the phone application while continuing to maintain the phone call.

In plain English – Apple has patented the feature where you can switch to another app on a device while maintaining a phone call.  Considering that the phone system on an iPhone is, itself, an app on the device, this means that Apple has laid the foundation for patenting the ability to switch from one app to another without turning either one off.

Haven’t we been doing this for years already?

Does the fact that one of the apps is a phone app really make this a unique, non-obvious invention or innovation?

I would argue no.  And even though I am not a lawyer, I stick by that argument.

Nothing in Apple’s patent application seems to be anything more than an “ante-in” offering for any sophisticated electronic device.  The fact that they can now, legally, prevent other software developers from releasing a feature we already have, use, and expect from even entry-level offerings is, in a word, laughable.

My prediction: Apple’s next patent attempt is for a handheld computer capable of also making phone calls.  Really, that patent would be no less legitimate than the one they were just awarded …

Should Free Software Have Free Support?

Fri, 16 Dec 2011 – by Eric

I do professional (paid) consulting for WordPress.  But I also write and distribute free plugins and themes for WordPress.  My paid business depends a lot on my reputation on the free side of things.

And that’s where I face a dilemma.

A lot of people use my free stuff.  And several of them come to me from time to time asking for new features, bug fixes, or just regular “I can’t figure this out” support.  Up ’til now, I’ve offered that support for free.

And that’s proven to be a bad idea.

So my question to you: how much is it reasonable to charge for ongoing development and support?

Please complete the following survey to share your thoughts on how much (if anything) is reasonable to charge for support, ongoing development, and feature requests when it comes to open source software. 1

Complete the survey through Google Docs

To say “thank you” I’ll be giving away a handful of Amazon.com gift cards to those who complete the survey.  How many I give away and the exact amount on each card will depend on how many people complete the survey.


  1. For the record, I will not stop giving away free software.  I’m just considering a few different ways I can continue to earn a living while doing it.
Finally! Microsoft Decides to Auto-Update IE

Thu, 15 Dec 2011 – by Eric

I’ve always been a fan of the way Chrome automatically updates itself.  New features come online as soon as they’re ready, and much of the emerging HTML5 standard just works.

Even Firefox has changed their release schedule to push out updates more frequently.  And for those of you who don’t know, when Firefox ships a new version, they cut off support for older versions.

From a web development standpoint, this is fantastic.  It means you can use cutting-edge technologies as soon as they’re ready and you don’t need to worry about supporting clunky, legacy browsers.

Unless you’re supporting Internet Explorer.

Well, until now.

Microsoft announced today that, starting next year, they will “automatically upgrade Windows customers to the latest version of Internet Explorer available for their PC.”

My thoughts on this development?  It’s about ****ing time!

IE to Start Automatic Upgrades across Windows XP, Windows Vista, and Windows 7

Everyone benefits from an up-to-date browser.

Today we are sharing our plan to automatically upgrade Windows customers to the latest version of Internet Explorer available for their PC. This is an important step in helping to move the Web forward. We will start in January for customers in Australia and Brazil who have turned on automatic updating via Windows Update. Similar to our release of IE9 earlier this year, we will take a measured approach, scaling up over time.

As always, when upgrading from one version of Internet Explorer to the next through Windows Update, the user’s home page, search provider, and default browser remains unchanged.

[Read the rest of Microsoft's announcement on the Windows Team Blog ...]

I’ve Still Got It!

Sat, 10 Dec 2011 – by Eric

Every now and then, someone asks me whether or not I still have quality WordPress development skills.  I think it’s a fair question.  After all, I spend the bulk of my time now working with closed-source ASP.Net projects and have little time for my favorite WordPress stuff.

But really, much of what I do in the .Net arena is pretty transferable.  And – this is me bragging a bit – I’m a good developer no matter what language or paradigm I’m working with.

There’s been a lot of talk about WordPress 3.3 coming out soon.  And a lot of that talk has been about the number of contributions and contributors to the project.  I’m proud to say that I’m in that group – I’ve had a patch in every major version of WordPress since version 2.8!

And I want to show that off.

You might notice a “Coding Credibility” section on my sidebar.  I’ve got Stack Overflow widgets, Ohloh widgets, Smarterer stats … but today, I polished off a new addition to that area – the WP Core Contributions Widget.

I saw a nifty list of contributed core patches on another developer’s site.  I wanted to steal it and throw it up on mine, too!  But apparently she hand-coded the widget for the sidebar.  Effective, but I’m too lazy for that.  So instead, I sat down and whipped up a plugin to do it for me.

WP Core Contributions Widget

This plugin scrapes the WordPress Trac site for any mention of “props ericmann” (or “props bob” or “props coolcoder” … whatever your username might be).  It then strips the search results down and extracts a changeset ID, a Trac ticket ID, and a commit message.

The IDs and links to the appropriate changesets/tickets are then displayed in a list in the sidebar.
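The parsing step can be sketched in a few lines of plain PHP.  The result line and the regex below are illustrative – Trac’s real search markup is messier, and this isn’t the plugin’s exact code:

```php
<?php
// Simplified stand-in for one Trac search result line.
$result = 'Changeset [19712]: Fix widget output. Fixes #19023. props ericmann.';

// Pull out the changeset ID, the commit message, and the ticket ID.
if ( preg_match( '/\[(\d+)\]:\s*(.+?)\s*Fixes #(\d+)/', $result, $matches ) ) {
    $contribution = array(
        'changeset'     => $matches[1],
        'message'       => $matches[2],
        'ticket'        => $matches[3],
        // Build links to the corresponding changeset and ticket pages
        'changeset_url' => 'https://core.trac.wordpress.org/changeset/' . $matches[1],
        'ticket_url'    => 'https://core.trac.wordpress.org/ticket/' . $matches[3],
    );
}
```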

Themable Widget

On top of that, the template for the widget is theme-able!

I’m using a pattern I adopted from another fantastic WordPress developer.  Basically, there’s a widget template file bundled with the plugin.  When the widget loads, it first checks to see if you’ve added your own template to your theme.  If you have, it loads that one.  If you haven’t, it loads the bundled one.

So if you want to use something other than an HTML unordered list (or if you want to add classes to better style the list), you can do that without hacking the plugin!  It’s a vast improvement on my previous work.
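The lookup pattern itself boils down to a cascade of file checks – theme first, then the plugin’s bundled fallback.  A plain-PHP sketch (the paths and function name are illustrative; the real plugin resolves paths against WordPress’s theme directories):

```php
<?php
// Return the first candidate template that exists, else the bundled default.
function find_widget_template( array $candidates, $bundled ) {
    foreach ( $candidates as $path ) {
        if ( file_exists( $path ) ) {
            return $path; // a theme override wins
        }
    }
    return $bundled; // ship a default so the widget always renders
}

// Usage sketch: check the child theme, then the parent theme, then the plugin.
$template = find_widget_template(
    array(
        '/themes/my-child-theme/core-contributions.php',
        '/themes/my-parent-theme/core-contributions.php',
    ),
    '/plugins/wp-core-contributions-widget/template.php'
);
```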

Next Steps

The plugin could still use a bit of work.  I literally wrote it in an hour or two, so there’s bound to be room for improvement.  Just off the top of my head:

  1. The plugin only scrapes the first page of search results.  So if you’ve written more than 10 patches, it will still only display the latest 10.
  2. The plugin searches for “props {username}”, but if your username isn’t the first in a list of multiple contributors, you won’t show up.
  3. I’ve created a .po file, but have yet to translate any of the plugin’s strings to other languages (seriously, there are 5 … should be easy).

So is it perfect?  Not yet.  But it’s ready for a public beta test!

Get The Plugin

For now (until I’m ready for a 1.0 version that solves at least the first 2 of the above issues), the plugin will be located on GitHub.

If you’re a developer, I’d love if you’d clone the repository and maybe take a stab at fixing one of the outstanding issues.  Send me a pull request and I’ll take a look at your work.

If you just want to use and test the plugin, the latest version is bundled for download as a ZIP file.  Just follow the standard manual installation instructions (also outlined in the readme) to get started.

And if you run into any problems at all, leave me a note here or open an issue on GitHub.

WordPress Portland

Tue, 29 Nov 2011 – by Eric

As promised, here is the code for my demo of adding feature pointers to WordPress in version 3.3.

And again, please do not use these in distributed plugins/themes.  They’re only slated for Core at the moment, but if you feel that they’ll help in your custom theme/plugin development with clients, feel free!

<?php
/*
Plugin Name: WordPress Portland Meetup Pointer Demo
Plugin URI:
Description: Demonstrate feature pointers in WP 3.3
Author: Eric Mann
Version: 1.0
Author URI: http://eamann.com
*/

add_action( 'admin_enqueue_scripts', 'pdxwp_pointers_header' );
function pdxwp_pointers_header() {
    $enqueue = false;
    $dismissed = explode( ',', (string) get_user_meta( get_current_user_id(), 'dismissed_wp_pointers', true ) );

    // Only queue the pointer if the current user hasn't already dismissed it
    if ( ! in_array( 'pdxwp_pointer', $dismissed ) ) {
        $enqueue = true;
        add_action( 'admin_print_footer_scripts', 'pdxwp_pointers_footer' );
    }

    if ( $enqueue ) {
        // Enqueue pointer script and styles
        wp_enqueue_script( 'wp-pointer' );
        wp_enqueue_style( 'wp-pointer' );
    }
}

function pdxwp_pointers_footer() {
    $pointer_content = '<h3>Welcome WordPress Portland!</h3>';
    $pointer_content .= '<p>This is an example of an admin pointer.</p>';
    $pointer_content .= '<p>You can use it in your <a href="http://wordpress.org/extend/themes">themes</a> ';
    $pointer_content .= 'and <a href="http://wordpress.org/extend/plugins">plugins</a>.</p>';
?>
<script type="text/javascript">
jQuery(document).ready(function($) {
    // Attach the pointer to an admin screen element
    // (selector here is illustrative – point it at whatever you're highlighting)
    $('#wpadminbar').pointer({
        content: '<?php echo $pointer_content; ?>',
        position: {
            edge: 'left',
            align: 'center'
        },
        close: function() {
            // Record the dismissal so the pointer doesn't reappear
            $.post( ajaxurl, {
                pointer: 'pdxwp_pointer',
                action: 'dismiss-wp-pointer'
            });
        }
    }).pointer('show');
});
</script>
<?php
}
Security Vulnerabilities

Tue, 22 Nov 2011 – by Eric

Out of the blue today, a user of one of my plugins contacted me to ask why I was so slow in patching a security vulnerability in my system.

The question came as a complete surprise.

Apparently, back in January, someone discovered a potential security hole in one of my plugins, WP Publication Archive.  The frightening thing about the report, though, was the fact that he never bothered to report the vulnerability to me so I could fix it.  Instead, an open report sat there on his site, and was then picked up by a few other security sites and syndicated across the Internet.

Had this user not contacted me, I would never have known about this issue.  And I can’t fix something if I don’t know it’s broken.

The Hole

WP Publication Archive uses a proxy file to load a remote file as an attachment so it can be downloaded by the browser.  Here’s the entire source of the “vulnerable” file:

<?php
// (The original file loads its mimetype helper class before this point.)

if ( ! isset( $_GET['file'] ) )
    die();

$mime = new mimetype();

$fPath = $_GET['file'];
$fType = $mime->getType( $fPath );
$fName = basename( $fPath );

// Strip the internal "_#_#<id>" suffix to recover the original filename
$origname = preg_replace( '/_#_#\d*/', '', $fName );

$fContent = fetch_content( $fPath );

output_content( $fContent, $origname );

function fetch_content( $url ) {
    $ch = curl_init();
    curl_setopt( $ch, CURLOPT_URL, $url );
    curl_setopt( $ch, CURLOPT_HEADER, 0 );

    // Buffer cURL's output instead of streaming it straight to the browser
    ob_start();
    curl_exec( $ch );
    curl_close( $ch );

    $fContent = ob_get_contents();
    ob_end_clean();

    return $fContent;
}

function output_content( $content, $name ) {
    header( "Expires: Wed, 9 Nov 1983 05:00:00 GMT" );
    header( "Last-Modified: " . gmdate( "D, d M Y H:i:s" ) . " GMT" );
    header( "Content-Disposition: attachment; filename=" . $name );
    header( "Content-type: application/octet-stream" );
    header( "Content-Transfer-Encoding: binary" );

    echo $content;
}
In reality, there isn’t a security hole here.  It was reported that this file would include a remote file, thus opening your site up to the possibility of remotely executing malicious code (in the same vein as the TimThumb exploit from several weeks ago).

But this is not the case.

Nothing is included from the remote file; the script only passes it to the browser as a download.  So the worst you could do is intentionally download a malicious file as an attachment.  It does, however, open your site up to being used as a remote proxy for files you don’t want passed through your server.

So needless to say, it did need to be patched.

So I took some time tonight to sit down, fix a couple of other bugs, and add some code to quickly plug that hole.  It’s not a permanent fix, but does prevent the immediately apparent proxy exploits.  A more elegant patch is already in the works.
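The stop-gap boils down to refusing to proxy anything outside your own uploads directory.  Here’s a rough sketch of that idea – the function name and allowed prefix are illustrative, not the plugin’s actual patch:

```php
<?php
// Reject proxy requests for files outside the site's own uploads directory.
function pub_arch_is_allowed_file( $requested, $allowed_prefix ) {
    // The requested URL must start with the allowed prefix exactly.
    return strpos( (string) $requested, $allowed_prefix ) === 0;
}

// Usage sketch inside the proxy file:
if ( isset( $_GET['file'] ) ) {
    $allowed = 'http://example.com/wp-content/uploads/'; // illustrative value

    if ( ! pub_arch_is_allowed_file( $_GET['file'], $allowed ) ) {
        header( 'HTTP/1.1 403 Forbidden' );
        die( 'Invalid file request.' );
    }
}
```

A fuller fix would parse the URL and validate the host and path separately; a bare prefix check is just the quick plug described above.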

Why I’m Upset

But the issue here isn’t really the seemingly insecure code, or even that I had to spend extra time fixing the code.  It’s that a vulnerability was detected and, rather than being reported to me, was posted in a public forum.

For those of you who don’t know, the standard process for reporting any potential vulnerability or exploit is to contact the original author and give them a reasonable amount of time to create a patch before disclosing a 0-day hack to the rest of the world.  Developers need a chance to take action to protect their users’ sites.  It’s common courtesy in just about every developer community in existence.

So I contacted every site that posted information about this security hole, expressed my frustration that they posted it without contacting me, and asked what could be done in the future to prevent this kind of communication breakdown.

One of the reporters, who is also a respected member of the WordPress community, got in touch with me immediately.  He was only recycling a notice he’d seen on another site and didn’t realize that I hadn’t been contacted by the original author.

The original reporter, however, was somewhat less useful:

[O]ur policy is immediate, full disclosure. We also release a variety of tools to aid in the security testing process. It must be stressed that the burden of writing secure code is that of the developers. We encourage you to learn more about secure programming practices.

Am I responsible for writing secure code?  Absolutely.  And I test all of my code to the best of my ability before I release anything.  But just like every other developer on the market, I make the occasional mistake.

Nobody’s perfect.

Had I been notified about this bug and given time to fix it, I would have.  And in doing so, I would have helped make several sites somewhat more secure.

The Real Issue

The real problem here isn’t the security vulnerability, or even how it was reported.  It’s how the community as a whole functions.

A few years ago, I discovered a nifty WordPress plugin on Google Code that let you create a file/document archive on your site.  I loved it!  I used it on my sites, on client sites, and talked it up at WordCamp.

Then, in 2007, the original author abandoned the project.

The next version of WordPress introduced changes that broke the plugin.  So I took the time to re-write it, clean up the code, and eventually update it to use Custom Post Types.  I placed my fork of the plugin in the official plugin repository and all was good.

This is how the community is supposed to function.  Developers help one another out.  If one guy can’t keep going on a project, someone else can step in and take over.  That’s the beauty of open source.

The problem with the security vulnerability wasn’t actually with my version of the plugin, but was reported against the pre-fork version hosted on Google Code.  I’ve been able to verify that the current release doesn’t suffer from the same issues, and have proactively patched the one issue that might crop up as a result of this report.

Unfortunately, the original reporter didn’t contact me about the issue, so I couldn’t clear things up.

And subsequent reporters actually linked the report back to my post-fork version of the plugin, essentially linking the new, secure versions to the old, insecure one.

On the one hand, we have developers going out of their way to help one another.  On the other, we have developers resting on their laurels and mis-reporting information that ends up hurting one another.

The real problem here is one of a massive communication breakdown.


One of the first re-posters of the security notice has clarified things on his site to explain that the vulnerability is actually present in the old, abandoned pre-fork version of the plugin.

The original poster has re-scanned the updated version and verified that no vulnerabilities were detected in the current release. His notice remains on the site, though, as it is accurate for the pre-fork version that triggered it in the first place. However, anyone can contact him directly to verify that the current version is tested to be secure.

Keeping it Realtime – Day 2

Tue, 08 Nov 2011 – by Eric

I will once again be liveblogging the Keeping it Realtime conference in Portland, Oregon.  If you want to catch up with yesterday’s stream, feel free.  Otherwise, stay tuned for more today!

You can also leave comments at the bottom of the feed …

4:56 pm: Next on the schedule is Jack Moffitt on “Imagining the Future of Realtime” at 5:20 …

4:55 pm: Is it true?  Do we really have a 25-minute break before the next session?  Time to stretch!

4:53 pm: The various frameworks are being built out of very different use cases.  So “real time” is more a family of ideas than it is a singular concept.

4:52 pm: Are there any efforts at collaboration between the various frameworks?  Collaboration would help create standardization, but I agree that it might be a mistake.  Don’t start standardizing while you’re still experimenting with new approaches.  Get it all on the table first.

4:49 pm: Node Inspector lets you step through code running on the server inside a browser using Chrome’s JS debugger.  Awesome tool.

4:48 pm: What are you using for debugging? “I’m using Visual Studio” … “I’m using console.log()” … yeah, I can vouch for VS being a far easier way to debug than printing to the log …

4:46 pm: Having a context and a stack trace is nice, I agree, but sometimes they don’t make much sense, either.  I have yet to see a stack trace for a multi-process/asynchronous system that makes sense …

4:43 pm: SignalR is all about making “this kind” of programming available to a .Net stack.  It’s too important to not have anything.  I agree 100% and hope to see some options come out in the future.  I like the idea of SignalR so far, but I’d like to see other approaches at addressing the same problem, too.

4:42 pm: More realtime frameworks should interface in a polyglot kind of way.  And should add distributed/realtime debugging.

4:39 pm: What is the biggest problem that your framework does not solve?  This is a very good question when people like me are trying to choose a framework to start with.

4:38 pm: What technologies inspired your frameworks? Ruby programming … Jarvis from Iron Man …

4:36 pm: Using middleware to cancel DDoS attacks … like someone running an RPC command from the browser console a million times …

4:33 pm: What is your framework’s security model?  “None.”  That’s the question no one’s asked yet at this conference.  I know everyone’s focused on building the API right now, but “security’s not a big deal” is not a good answer.  Security should be one of the first concerns, not an after-the-fact addition.

4:29 pm: We should be benchmarking the user experience.  That’s all that matters.  +1

4:27 pm: One thing that unites all of the guys using this realtime “stuff” right now is the use of Redis.  That’s true based on what I’ve seen so far … and I’ll be learning Redis over the next few days and weeks to make sure I’m ready for it!

4:26 pm: A lot of the realtime standards/frameworks are still in their infancy.  This tells me far more about the industry as a whole than most of the other conversations.  Really, if you use any one of these frameworks you’ll need to be somewhat risk tolerant, because the API and standard will probably change tomorrow.  So use it, but be ready and willing to change and evolve your code and implementation as the system grows and matures.

4:24 pm: We might see more of an organic standards practice with regards to realtime.  This is similar to the way jQuery’s DOM selector pattern has influenced the modern web …

4:22 pm: Should there be a standard way of “doing realtime?”  I can’t speak for the panelists, but I don’t think there should be.  The potential use cases are still being developed, so creating and sticking to a standard now would be a bit limiting.

4:17 pm: Out of all of them, I’m the most excited about SignalR … if you haven’t heard of it, it’s freaking awesome!

4:17 pm: We’re discussing: SocketStream, Flowtype/NowJS, SignalR, dNode, hook.io, and Derby/Racer.

4:15 pm: I once again have Internet access … but it may or may not last.  In any case, we’re now starting the panel comparing various realtime frameworks.

1:47 pm: Watching so many command line demos has me really wanting a Mac or a Linux box … I’m installing Cygwin for now …

1:39 pm: Message queues can be implemented in MongoDB, but you have to poll … so it’s not a good solution.  Redis has Pub/Sub, PostgreSQL has listen/notify.  RabbitMQ and ZeroMQ are MQs … right?

1:33 pm: Making scalability more fun, less painful …

1:32 pm: Next we get a “turbocharged toolkit for realtime in the Cloud” … yay for practical takeaways!

1:30 pm: Sorry, I missed the last half of that chat.  My office website crashed and I’ve spent the last few minutes debugging in terminal server …

1:07 pm: The browser doesn’t care which Node instance is handling things on the back end.  It gets all of its data through one pipe.  So don’t use one Node instance to handle two different data sources – like a chat feed and a Twitter feed.

1:05 pm: Does anyone here think that JavaScript is not fun?  Yeah … rhetorical question, but really … not fair to single out someone who dislikes your favorite system just because you have a microphone.

1:04 pm: You can exploit Node as a message service on the back end with a bunch of different workers.

12:59 pm: Don’t re-invent the wheel.  Leverage the frameworks and systems that have already been built.

12:53 pm: You can’t just push TCP traffic out to the Internet, you need a gateway to transport it.  This is where Node and similar systems come in – they act as the gateway.

12:49 pm: HTTP is wasteful.  Large headers for overhead, high latency with polling and new connections.  These aren’t issues with websockets.

12:46 pm: We’re all in agreement, websockets are the way to go!  The original system was designed for document pages … i.e. Mosaic.

12:45 pm: Next presentation by Axel Kratel.

11:53 am: Time for lunch! Let’s hope it’s a tastier fare than the carb-fest that was our breakfast … bread + french toast + oatmeal + cold cereal … we need some protein!

11:50 am: Why sticky sessions over pluggable persistence?  Because it’s meant to work with different development frameworks and languages.  You might do one thing if you’re using Redis and Node, you might do something else with a different system.  Specific decisions have been made in order to keep SockJS “polyglot.”

11:46 am: Some of the complexity in this system is rooted in the fact that the browser is untrusted.  That’s an issue that comes up time and time again … and I have yet to see a reliable solution that doesn’t include just forcing more of the processing to the server side.

11:41 am: Google App Engine, PubNub, and Pusher are all trying to do the same thing … providing a messaging API that’s simple to use.  But they’re all very complex under the hood.

11:39 am: SockJS is only a transport layer.  If you want higher-level message abstractions, you should build them yourself.

11:37 am: If your server can handle sockets, but doesn’t have a SockJS library for it, you can deploy SockJS as a proxy … that’s interesting, but I don’t see the need.  Sounds like adding complexity for complexity’s sake …

11:34 am: SockJS requires an asynchronous stack to work properly.  This makes perfect sense and speaks to a lot of the problems I had implementing websockets with Apache a few months ago.

11:33 am: Most importantly, there is no Flash fallback … just native JavaScript.  This is the best part of SockJS, in my opinion.

11:32 am: SockJS gives you abstractions that look like websockets but can use different underlying transports if needed.  It makes it easy to begin using websockets without needing to worry about the different implementations of the spec.

11:30 am: Chrome treats websockets as a binary thing and tries to push a lot of data through.  Firefox treats them like HTTP because they look like HTTP.  This makes using proxies difficult.

11:28 am: If you want to sum up SockJS in one sentence, you could call it a Socket.IO clone.  I guess that answers my question about the differences between the libraries …

11:27 am: Ooh, he’s giving away t-shirts for audience participation.  You can never have enough dev-related t-shirts.

11:27 am: “WebSocket emulation kept simple, stupid.”

11:25 am: Next up is Marek Majkowski talking about SockJS.  Hopefully this will explain how SockJS is different than Socket.IO (aside from some obvious differences in implementation that we discussed over breakfast).

11:25 am: Why didn’t they use websockets for the messages?  Because they needed to support multiple browsers and devices that don’t support websockets.  So instead they used a standard HTTP POST to send the data and long polling to update the vote tallies.

11:25 amWhy didn’t they use websockets for the messages?  Because they needed to support multiple browsers and devices that don’t support websockets.  So instead they used a standard HTTP POST to send the data and long polling to update the vote tallies.

11:21 amAnd that’s the third person who’s alleged that this is a Node-centered conference.  Maybe we just need more asynchronous, event-driven, open-source servers …

11:20 amIs the underlying technology Node.JS since this is the Node.JS conference?  No … they’re using C.

11:17 amHow does data replication occur for multiple servers? As long as they’re all connected, they’ll all receive the message.  They’re basically all talking to one another all the time and sharing updates.

11:13 am10-15 languages and 10-15 frameworks … definitely a rich system.

11:12 amSpike allowed people to vote once every 5 seconds.  So apparently there is a bit of state information transferred in that massively distributed system.  They managed to handle 10s of thousands of concurrent users, too!

11:09 amDrumo is a realtime app on the web that’s similar to Quora.

11:07 amPubNub is “one step closer to the Singularity …”  This is an idea that came up yesterday and, oddly enough, the exact topic of the story I’m writing for NaNoWriMo.

11:05 amDoing this kind of project required a new type of architecture.  You can’t just take an existing LAMP stack application and add some realtime functionality to it.

11:02 amOnce again, Redis comes up as an efficient, quick data store … I will be adding Redis to my stack within the next few weeks …

11:01 amVotes came out at a rate of 350 votes/second and updated aggregate totals across all devices at a rate of 10 times/second.

11:01 amAfter 80k votes, there was only a 100-vote difference between zombies and vampires.  I’m less impressed with that stat as I am with the fact that they collected all of those votes in real time in less than a 5 minute period …

10:59 am: Ten times per second every device connected to the site updated the voting results.  Everyone saw realtime data in real time.  Awesome!

10:58 am: (The Internet is a bit spotty today, so if I lose you, I apologize in advance.)

10:57 am: Real-time voting with SpikeTV using The Deadliest Warrior as an example.  I love that show!

10:28 am: Had to step out for a minute, what’d I miss?

10:22 am: OK, that was cool.  Sending a message from ZeroMQ on the server side with python over a socket to NullMQ in the browser.  Great to have the demo pay off.

10:21 am: One of the slower live code demonstration sessions of the conference … the ones using prepared code seemed to go a bit smoother.

10:16 am: Creating a request socket in one browser session and a listener in the other.  Nifty.

10:15 am: Another real-time demo of realtime.  Awesome!

10:14 am: Hmm … apparently our table just lost power.  Could be a bad sign …

10:11 am: STOMP is a simple protocol for PubSub connections. It looks a lot like HTTP: there are verbs, a header, and a body.  It’s simple and human-readable.
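The HTTP-like framing he’s describing is simple enough to parse in a few lines.  A minimal sketch of my own (not any official STOMP client — a real one also handles header escaping, heart-beats, and partial reads):

```javascript
// A STOMP frame is: COMMAND line, header lines ("key:value"), a blank line,
// then the body, terminated by a NUL byte.
function parseStompFrame(raw) {
  const frame = raw.replace(/\x00$/, '');   // strip the NUL terminator
  const headerEnd = frame.indexOf('\n\n');  // blank line separates headers from body
  const lines = frame.slice(0, headerEnd).split('\n');
  const command = lines[0];
  const headers = {};
  for (const line of lines.slice(1)) {
    const i = line.indexOf(':');
    headers[line.slice(0, i)] = line.slice(i + 1);
  }
  return { command, headers, body: frame.slice(headerEnd + 2) };
}

const sample = 'SEND\ndestination:/queue/chat\ncontent-type:text/plain\n\nhello realtime\x00';
// parseStompFrame(sample).command === 'SEND'
```

The whole wire format really is readable by eye, which is the point being made here.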

10:09 am: NullMQ is meant to be an implementation of ZeroMQ for the browser.  It’s designed to extend features and the ZeroMQ API to the client-server nature of the web.

10:08 am: ZeroMQ isn’t in the browser … so there are some limitations to what you can do.  You’d have to write a plugin/extension to get it into the browser.

10:07 am: I think I could integrate a Twilio-style realtime phone call support feature into my company’s client portal … add another idea to the never-ending to-do list I’ve created this week …

10:05 am: Twilio has a product for making and receiving calls through the Internet using JavaScript.  Now that’s just cool!

10:04 am: The Paranoid Pirate Pattern?  Interesting development nomenclature …

9:59 am: ZeroMQ isn’t a server or a protocol, it’s a library that presents a socket-like messaging API over a variety of transports.

9:58 am: The context for this involves ZeroMQ, which is a transport layer I’ve never really heard of.  But from the docs, it acts as a concurrency framework.  Maybe this answers one of my earlier questions …

9:56 am: NullMQ is so new, it started 3 weeks ago.  I love new tech!

9:55 am: And now Jeff Lindsay will talk about NullMQ.

9:53 am: Node is at version 0.6 but is getting pretty stable.  If you want to build a business on it, you might want to wait for 1.0.

9:47 am: A one-line HTTP server in Node can proxy a request.  Awesome.  I need to start using Node.  Looking forward to the Node/IIS discussion later today!

9:45 am: “clientRespsonse” … someone didn’t proofread their slides.  This is why I don’t use code in my presentations …

9:40 am: I’ve never heard anyone pronounce “url” as a word before.  We all usually just spell it out.  Took me a second to figure out what he was talking about.

9:36 am: “We need to steal the right ideas, not the popular ideas.”

9:33 am: If anyone in the world who wants to edit a specific Yammer page goes to the same Node process, it does make the application code easier.  But that kind of centralized computing comes with its own problems.  There’s a reason AWS and Google’s cloud are so popular …

9:31 am: One of the reasons dNode is great is because you don’t have to send all of your state arguments with the RPC request, you just leave them in the closure.
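The closure trick being described here is plain JavaScript: state is captured where the function is defined, so the remote caller never has to ship it with each call.  A contrived sketch of my own (not actual dNode code):

```javascript
// The exposed RPC function closes over `db`, so callers only send the
// arguments they care about — the "state" rides along in the closure.
function makeApi(db) {
  return {
    lookup: function (key, cb) {
      cb(null, db[key]); // `db` came from the closure, not the RPC payload
    },
  };
}

const api = makeApi({ 42: 'the answer' });
api.lookup(42, (err, value) => {
  // value is 'the answer' — no db handle was passed by the caller
});
```

dNode serializes functions like `lookup` across the wire, but the state they close over stays on the server.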

9:29 am: PHP, Python, and Ruby all suck at concurrency.  So this makes me wonder if there’s a language/framework that excels at it …

9:28 am: “Node.JS is starting to dominate this space.”  That’s seemed very obvious from this conference.  One of the guys I talked to today even suggested changing the conference to ‘Keeping Node Realtime’ based on the number of speakers referencing the project …

9:25 am: “You might want to talk to Facebook if you want to talk to somebody’s mom …”

9:23 am: Another Rails presentation … I really need to learn Ruby …

9:22 am: “The consumer is never the content creator.”  At least not until Web 2.0 changed the way the web worked …

9:22 am: So far I’m seeing a lot of logical progressions in how the server side of things changed – static pages to static pages served by Apache to database parsed by PHP and served by Apache.  But we never really changed the browser and are still stuck in the stone age when it comes to consuming the information stored in the DB.

9:20 am: The problem in the old days was getting really old browsers to use these static web pages … and in many circles, that’s still the problem.

9:16 am: Getting started with the introduction talk by Mikeal Rogers.

8:41 am: Everyone here seems to use Rails for just about everything.  Methinks it’s time to learn Ruby …

8:21 am: Good morning from the breakfast table!
Keeping it Realtime
http://mindsharestrategy.com/2011/keeping-it-realtime/ – Mon, 07 Nov 2011 17:16:53 +0000 – Eric

Today and tomorrow I’ll be at the Keeping it Realtime conference in Portland, learning about all the cool new interfaces available for a real-time web.  Unfortunately, I wasn’t able to finish my liveblogging plugin before today … so you’ll be stuck hitting F5 repeatedly to get updates from me in this space.  On the other hand, this will serve as a real-world demonstration of why the non-real-time web is so ineffective for real-time communications.

Maybe we’ll both learn something! :-)

6:00 pm: Well that just about does it for the day.  Time to go home and recharge the batteries for tomorrow’s sessions.  Have a great night!

5:55 pm: Ironically, it’s usually consumption that outpaces production … but that seems to have reversed in the Internet age at least.

5:54 pm: If the web is growing faster than the leading search indexer (Google) can index it, what does that mean for the future of data? The real-time web will only make the content on the web grow faster.  Can consumption keep up?

5:49 pm: How contextual should push notifications be?  The guy asking the question makes a good point.  I don’t care about every new follower I get on Twitter, but there are some followers I do care about – like when a celebrity or someone with a larger network influence starts paying attention.  I don’t need push notifications every time a dollar or two comes in or out of my bank account, but if there’s a huge, atypical withdrawal (or deposit?) I want to know pretty quickly.  Context is what frames the value of the immediate notification.

5:46 pm: If you want people to find your app using Google, then (right now, at least) you almost need to build two versions of your app – one for the user, one in HTML for the crawlers.  I thought that was the point of the hash-bang API Google and Twitter have been experimenting with.  It at least warrants some further research.

5:42 pm: If I need to go to your app to get value out of it, there’s a problem.  I’m busy enough, the content and value should come to me.  The real-time web isn’t so much about making the web faster, it’s about making it more convenient.  Updates and info come to me as I need them, they don’t sit on some static site and wait for me to come looking for the update.

5:40 pm: Panel discussions work best with 3 experts.  Maybe 2, but no more than 4.  The moderator just summed up the 8-person panel perfectly: “We’ve had 8 people talk about themselves and the future now for 20 minutes …”  Makes it hard to get into real content and discussions with a 40-minute time limit.

5:34 pm: “Some apps are going to be real-time and it’s going to freak people out.”  I definitely agree with that.

5:31 pm: “Communicating with super-low latencies will give us some really cool apps.”  I agree, but a lot of that still depends on technology.  A lot of people still don’t have high-speed Internet, smart phones, or computers capable of doing even half of what we’re doing …

5:28 pm: “We go to a conference to meet people.”  I like the idea … but really, this conference has had such a tight schedule I haven’t had the time to meet many people.  And the few I’ve had the chance to meet were kind of floating around in a clique … I got first names, quick handshakes, then they started talking about events from the previous weeks and started catching up on “in” things from their group.  Not so much “meet people” there … so sorry, I disagree.

5:24 pm: BankSimple will eventually be exposing a developer API?  Awesome!

5:23 pm: The real time web, in the banking world at least, has been around for a long time on the side of the banks.  But the future of the real time web is putting it into the hands of the consumer.  Thanks for the innovations, Alex Payne!

5:19 pm: Eight panelists … I definitely do want to know what they all do … but with only 40 minutes for the panel I think these introductions will take way too long.  Be brief people, please.

5:18 pm: And now, the panel begins.

5:01 pm: Today’s final panel will feature Mikeal Rogers, Alex Payne, Leah Culver, Julien Genestoux, Nathan Fritz, Jack Moffitt, Jeff Lindsay, and Chris Blizzard.  Should be informative, useful, and educational!

4:56 pm: 10-15 minute break, then the panel I’ve been waiting for all day!  Time to stretch …

4:55 pm: … and if anyone builds something based on that idea, I want a copy.  And a short “inspired by” attribution buried somewhere in a readme … nothing big, just something I can show off while waiting for the race to start …

4:54 pm: I definitely think a geolocation feature for a marathon would be a good seller.  “Track me as I run this race.”  I’d love to post an interactive map of a race, let people see where I am at any point in time, link together with a few web cams so they can watch me as I pass through check points.  That’d be very cool.

4:52 pm: A geolocation game played with cars … hmm …

4:51 pm: So what’s the difference between a geolocation game and an augmented reality game?  I think it’s all in the UI, and that’s why it’s effective.  With a geo game, you use the real world as the UI and the phone/device is just an added input.  With AR, you use the phone/device as the UI and the real world is an added input.  It’s all about affecting common behavior.  If AR really wants to take off, I think it needs to start in geolocation and evolve with the technology.

4:46 pm: A physical drive with 6 GB/s read and 4.4 GB/s write speeds?  Holy crap.  Even at 0k for a drive, that’s really incredible.  I don’t think I’ve even touched something that needs that kind of speed.

4:44 pm: I like the slide title: “Examples of doing it wrong.”  Ironically, just about everything on this list came up last week when I had a performance problem with my app.  I’m glad I was able to solve it without falling into any of these traps.

4:42 pm: The most relevant data should stay outside of the slowest point of the application.  If MySQL (or the persistent store) is the slowest point, then cache data in memory rather than reading/writing from the database frequently.
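In code, that advice usually boils down to a read-through cache in front of the slow store.  A minimal sketch of my own — `loadFromDb` is a hypothetical stand-in for the MySQL read:

```javascript
// Read-through cache: hit memory first, fall back to the slow store once
// per key, then serve later reads from memory.
function makeCache(loadFromDb) {
  const mem = new Map();
  return function get(key) {
    if (!mem.has(key)) {
      mem.set(key, loadFromDb(key)); // slow path, taken once per key
    }
    return mem.get(key);             // fast path
  };
}

let dbReads = 0;
const get = makeCache((key) => {
  dbReads++;               // count trips to the "database"
  return 'row:' + key;
});

get('score'); // slow: hits the store
get('score'); // fast: served from memory; dbReads is still 1
```

A production cache would also need invalidation and a size bound, which is exactly where the “doing it wrong” examples tend to come from.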

4:41 pm: Wow … that was just the overview?  Intense …

4:40 pm: The “Reactor pattern” is a way to resolve the blocking IO (scalability) issue, and most programming languages have one.

4:39 pm: “Ruby doesn’t scale well” is a myth.  The scalability issue is actually common to many programming languages, not just Ruby.

4:38 pm: The trick to async with MySQL is message queues … put that on my to-do list for research, since I’m not 100% sure how to accomplish that.

4:37 pm: The basic stack includes Redis, Node.JS, Socket.IO, and the web application itself.  He’s going to give us more details … and now I understand Redis a bit more.  I think a Pub/Sub feature for a key/value store is really powerful and will probably use it to power the subscription engine of SwiftStream as I continue to build it out …
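The publish/subscribe semantics that make Redis useful here are easy to sketch in-process.  This emulation is my own illustration of the pattern, not the real thing — Redis does this across processes via its PUBLISH and SUBSCRIBE commands:

```javascript
// In-process emulation of Redis-style pub/sub: subscribers register per
// channel; publish fans the message out and returns the receiver count
// (Redis's PUBLISH likewise reports how many subscribers got the message).
function makeBroker() {
  const channels = new Map();
  return {
    subscribe(channel, handler) {
      if (!channels.has(channel)) channels.set(channel, []);
      channels.get(channel).push(handler);
    },
    publish(channel, message) {
      const subs = channels.get(channel) || [];
      subs.forEach((h) => h(message));
      return subs.length;
    },
  };
}

const broker = makeBroker();
const seen = [];
broker.subscribe('scores', (msg) => seen.push(msg));
broker.publish('scores', 'zombies +1'); // delivered to one subscriber
```

The win in the stack being described is that the broker sits outside any one Node process, so every Socket.IO worker can subscribe to the same channel.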

4:33 pm: They originally built the system using Apache and discovered the hard way that it wasn’t asynchronous when 7,000 score updates were queued on the server and ate a lot of RAM.  I’m so glad people warned me about that before I started doing async work … yay for Nginx!

4:32 pm: The video of the graphics and pings on the map was pretty cool.

4:31 pm: GeoLoqi sounds like a fun team to work with.  Having your job be making a game as an experiment just to see if it works?  Awesome!

4:28 pm: Kyle Drake and Building MapAttack.  I know web-based games are pretty cool, but a web-based game using real-world geolocation … it’s very similar to the augmented reality stuff my friends were trying to get me into last year.  Should be exciting!

4:26 pm: Ironically, I just saw a presentation on Thoonk and when I tried to load their website, it crashed because the servers are over capacity.  Maybe it wouldn’t be the best choice for a scalable web app?  Just kidding, I’ll still give it a look …

4:24 pm: Some of the resources Henrik exposed us to: &! (his app), Thoonk.js, Backbone.js, and Capsule.js.  I still have no idea what Thoonk and Capsule do, but I’ll be looking into them.  And into Redis.  They could all be hugely useful.

4:20 pm: Those last pieces of advice were courtesy of Nathan Fritz, who will be speaking tomorrow.

4:19 pm: “If your app fits Redis, use it.  If it only kind of fits, don’t touch it.” – Great advice and reminds you to always choose the right tool for the job.

4:18 pm: Redis can keep the entire app in RAM for 10s of thousands of users.  That alone is impressive.

4:17 pm: Why Redis?  Basically because it’s “fast as hell” … or because it’s a “honey badger” … um … sure …

4:14 pm: Wow … this session is very much like drinking from a firehose.  Capsule.JS is the latest framework we’ve been told about, and I still barely understand what any of these systems do.  Kind of like a here-are-50-tools-pick-one presentation.  I’m not knocking it at all, it’s just a lot of information to absorb and retain.  I’m sure I’m forgetting far more than I’m retaining right now.

4:10 pm: Redis, an open source key/value store, is a great way to share memory between processes and languages.  It’s pretty scalable, too.

4:09 pm: Sharing data models between the client and the server is great for quick prototyping, but sharing that memory state isn’t very secure or scalable.

4:08 pm: Case Study #3: andbang (&!), which just launched today!

4:05 pm: Case study 2: Recon Dynamics.

4:04 pm: Parsing XMPP in the browser is a pain.  In my experience, parsing just about anything in the browser is a pain, so you want to be sure the data is ready before you send it.  Particularly if you’re concerned about cross-browser performance (I’m looking at you, IE) …

4:01 pm: Ick.  It’s built in Django.  I’m not really a fan of using Python and Python-related frameworks for web apps … don’t ask me for a solid argument why, just a lot of bad experiences …

4:00 pm: First case study: Frontdesk.im

3:59 pm: Real-time, real-life Pac-man after the conference?  Cool!

3:51 pm: The next session, presented by Henrik Joreteg, is about building 3 single-page apps 6 different ways.  When I first read that description, it sounded very much like reinventing the wheel … but I’m intrigued nonetheless.

3:45 pm: A lot of systems are turning things off by default now … is that the right plan of action?  I don’t necessarily think so.  A lot of users don’t understand how to turn things back on, so disabling features by default in favor of security is crippling users in my opinion.

3:41 pm: So where are websockets today in terms of security?  Adam feels comfortable with them, but doesn’t consider himself a websocket expert.

3:32 pm: Whose responsibility is it to write secure code?  Is it a requirement of the framework/language?  Or of the developer using the tools?

3:28 pm: “If you happened to fall asleep …”  Sorry, guilty.  But that was a really technical discussion of the issues in the community with a very limited introduction and not much room to breathe.

3:21 pm: Adam’s goals for the community:

  • Secure by default
  • Better examples – documentation that doesn’t suck
3:17 pm: The challenge is that we have a lot of developers who don’t really understand a lot of the security concerns that come along with development on the client.  They’re typically server-side developers who are now writing libraries for use on the client side … but they haven’t been coming from the client perspective.

3:15 pm: “I might make a few people upset by this talk” … Now I really want to know why.

3:14 pm: “Old Problems, New Tools” – Adam Baldwin

3:08 pm: Next up, a presentation from Adam Baldwin, the co-founder of nGenuity.

3:07 pm: Is there binary support for websockets yet?  It used to be stream based, now it’s packet based … but the best answer is “I think so?”

2:56 pm: Debugging long-lived applications in the browser?  Mozilla is building out a pretty advanced memory management tool.  They’re breaking out DOM, content, layout, style, etc.

2:52 pm: Applications don’t need to live in the cloud, they can live in the browser and interact with other browsers, the cloud, or other applications.

2:51 pm: It also allows for direct peer-to-peer data transfer.  That’s a lot better than routing data through a 3rd party server.  A peer-to-peer connection would be faster and, from a privacy standpoint, a bit more secure than a peer-to-server-to-peer connection.

2:51 pm: WebRTC is a direct audio/video connection between browsers.  It’s not run through a 3rd party server, to reduce latency.  I think it’s a fantastic idea, and what I referred to at one point as “Skype in the browser.”  Apparently Mozilla and Google are collaborating on it.  Awesome!

2:49 pm: Exposing device APIs and providing access to lower-level functionality of the machine to the browser and the browser’s applications are important.

2:47 pm: Applications and web pages are different things.  You install an application … and the mental associations that come along with that instill a sense of ownership.  When you use a web page, you merely visit the web page.  It’s easier to offer subscriptions and have a pricing model for an installable application than a website, even if they run on the same platform and present the same experience.

2:46 pm: You can build web-based applications that run in the browser but which aren’t server-based applications.  This is a hugely powerful concept.

2:44 pm: Google as an alien face-sucking monster … interesting analogy …

2:44 pm: “Data in the cloud is the new proprietary source code.”  Data is being locked in because it’s stored on a proprietary system.

2:42 pm: Websockets are nice because there’s not much overhead added to the request.

2:42 pm: HTTP has been evolving from long, static requests, to asynchronous requests for chunks of HTML via XHR (XMLHttpRequest), to AJAX long polling, to websockets.

2:39 pm: There are technologies that we’re going to be building into browsers that will change the way the world builds web applications.

2:37 pm: Considering the cool stuff Mozilla has been doing with Firefox lately, this should be a pretty powerful presentation.

2:37 pm: Next up, Christopher Blizzard from Mozilla … talking about “Real Time in the Browser”

2:35 pm: No idea what the next presentations are about … but I’ll be sticking in Track A for the next few speakers as well.  Their resumes are compelling enough to promise something interesting, so I thought I’d gamble and stick it out here.

2:25 pm: And now I hear about JSON-RPC … I need to see how that’s different from dNode, since that itself seemed very much like a JSON-powered RPC system.

2:24 pm: “Futon” is the event service bus for hook.io and CouchDB.  I think this particular community needs some help naming their projects …

2:22 pm: “That totally should’ve worked.”  No joy here, though.  Sad.  1 more demo to hopefully finish things up.

2:21 pm: Am I the only one seeing the freaking awesome implications of a system like this?  Twitter, IRC, a browser chat … all talking to one another and broadcasting messages.  One point of entry, one API, over 40 different application hooks to broadcast and transport messages.  Incredible!

2:20 pm: We’ve now established bi-directional communication between the browser and IRC … next we’re adding Twitter.

2:19 pm: OK, the browser just piped a message through hook.io into the IRC chat room … awesome stuff!

2:18 pm: Now we’re listening to IRC messages, too.

2:17 pm: All of hook.io is in active development, and hookJS was just introduced a couple of months ago.

2:16 pm: Crap … I tweeted and get to be the “first victim” in the presentation.

2:15 pm: Awesome captcha … that looks like nothing you can possibly type.

2:15 pm: Setting up hooks to listen to Twitter and IRC at the same time.  That’s freaking cool.

2:13 pm: Next demo – setting up a quick RSS feed server …

2:11 pm: “Sorry, bear with me for just one moment … ”  Methinks we failed with the third goal of the presentation …

2:10 pm: OK, I definitely will need to build something with hook.io.  This is nifty stuff.

2:08 pm: (My host might reboot my server in a few minutes … so if I disappear, I’ll be right back …)

2:04 pm: Goal for the live coding demos – build an application, process multiple data streams, don’t fail.

2:01 pm: IPO – Input, Process, Output.  Building on this model, you can have a lot of actors that make up an application that is greater than the sum of all its parts.
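The IPO model maps naturally onto small composable functions, where each actor is input → process → output and a chain of actors is itself an actor.  A toy illustration of my own, not hook.io code:

```javascript
// Compose actors left-to-right: the output of each becomes the input of
// the next, so the pipeline as a whole is just another actor.
const compose = (...actors) => (input) => actors.reduce((value, actor) => actor(value), input);

// Three tiny actors: split a line, clean the fields, count them.
const splitFields = (line) => line.split(',');
const trimFields = (fields) => fields.map((f) => f.trim());
const countFields = (fields) => fields.length;

const pipeline = compose(splitFields, trimFields, countFields);
// pipeline('a, b, c') === 3
```

hook.io wires actors together over the network with events rather than direct calls, but the "greater than the sum of its parts" idea is the same.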

2:00 pm: Sadly, I’m surrounded by Mac users.  Seriously, I can see 16 different laptops from my seat and every single one is a Mac!!!

1:59 pm: Apparently I’m not geeky enough … he instructed us to curl an address and I immediately went to my browser because I don’t know a better way to do it on Windows :-(

1:55 pm: Coming up next, Marak Squires and hook.io in Track A …

1:54 pm: I have about 50 different ideas in my head right now … and not enough time to do more than 2 or 3 of them.  Maybe I should just spec out some rough details and sell the concepts to the highest bidder …

1:46 pm: dNode is the same API for client-server and server-server.  And the protocol is entirely abstracted away so you can use the protocol without using the dNode library at all.  That’s the biggest difference between dNode and nowJS.

1:45 pm: “How do I handle getting and setting closure variables?  I don’t …”

1:45 pm: Testling – really simple JavaScript unit tests that will run in all browsers.

1:44 pm: Bouncy queries dNode using Socket.IO to route requests from one server to another and act as an on-demand load balancer … that was actually a pretty cool demo.  Simple, easy to use, but I think it’s insanely powerful.

1:41 pm: It definitely feels like “shared memory through communication” from that earlier presentation.  You don’t have to duplicate functionality in different locations so long as you communicate enough to expose and support that functionality in those different locations.

1:41 pm: Where dNode shines is in its functionality and its ability to expose functionality you’ve already written somewhere else.

1:40 pm: “Most of you are from the Bay area.”  I feel special … I’m not :-)

1:39 pm: Calling a remote process lists a bunch of data and a method name … I think an improvement would be to also list out parameters for the method.  Kind of like a WSDL for a SOAP call.  Knowing that the bart system exposes a departures() method doesn’t help me if I don’t know what parameters the method requires/accepts.

1:36 pm: There are PHP, Java, Ruby, and Node.JS adapters for dNode.  I wonder how hard it would be to write a .Net adapter.  JS-based RPC would be huge for my .Net MVC projects.

1:35 pm: So long as you use callbacks, it’s pretty much all going to work.

1:34 pm: dNode calls functions using an RPC pattern.  This is quite different from most of the REST-based web communications I’ve done so far.  Particularly since it’s an RPC call written in JavaScript that uses JSON as a transport rather than XML.

1:33 pmdNode’s wire protocol is basically newline-delimited JSON.
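
For context, here’s a sketch of what newline-delimited JSON framing looks like — my own toy version, not dNode’s actual source; the helper names `encode`/`decode` and the message shapes are made up for illustration:

```javascript
// Frame messages as newline-delimited JSON: one JSON object per line,
// so a receiver can split the stream on "\n" and parse each line alone.
// (Hypothetical helpers — not dNode's API.)
function encode(messages) {
  return messages.map(function (msg) {
    return JSON.stringify(msg) + "\n";
  }).join("");
}

function decode(stream) {
  return stream.split("\n").filter(function (line) {
    return line.length > 0;
  }).map(function (line) {
    return JSON.parse(line);
  });
}

// Round-trip two RPC-style messages over one "stream".
var framed = decode(encode([
  { method: "zig", arguments: [] },
  { method: "zag", arguments: [1, 2] }
]));
console.log(framed[1].method); // "zag"
```

The nice property is that any language with a JSON parser and string splitting can speak the protocol — which is presumably why adapters in PHP, Java, and Ruby are feasible.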

1:32 pmdNode isn’t just for Node.js and client side JavaScript.  There are some adapters for other languages as well.

1:31 pmWatching someone live code during a presentation makes me feel like a crappy developer …

1:29 pmYou have to be really comfortable with callbacks … but if you’ve been doing any JavaScript, you already are.

1:28 pmOoh … live coding demo. Using vi.  And Node.JS.  Awesome.

1:27 pmYou don’t have to build a routing table or marshal around this stream just to use the callback.

1:26 pmThe cartoon crocodile can zig() …

1:25 pmdNode makes it easy to synchronize a flow in realtime.

1:25 pmOne of my coworkers wants to learn to be a “hacker” … I don’t think he knows what that means, but it’s entertaining.  I should have brought him with me!

1:17 pmI’ll be sitting in Track A after this – first up in 10 minutes is James Halliday discussing dNode.

1:15 pmThere’s a lot of test coverage with Derby’s examples and a “pretty cool test suite around what Racer does.”

1:15 pmThe Derby framework will be used to build a lot of apps, will have changes to support ACLs and authentication, and will provide long-term support.

1:09 pmDerby isn’t trying to answer the “how do you scale to a million users” question.  Their first focus is on “how can anyone build a realtime app quickly and easily.”  It’s all about getting the API defined first, then they’ll focus on scalability.

1:07 pmDerby can drop in realtime interaction to any web app.  Check out a pre-beta demo at http://derbyjs.com.

1:06 pmDemoing a realtime chat app in a room full of geeks with laptops … that was interesting.

1:05 pmThe goal of Derby is to provide a way for every developer to build applications that are fully realtime and fully multi-user.

1:03 pmBut application schema and data schema aren’t quite the same thing.  That’s actually a great innovation and one I’ve already used in a couple of projects.

1:02 pmData will automatically sync to your database, not just between your client models and your server models.

12:59 pmIn the error handler, you can re-try the same action that just failed.  I can see how this will help circumvent race conditions by just reapplying changes … but it seems a bit inefficient to just retry upon failure.
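
As a sketch of that retry-on-conflict idea — my own generic version, not Derby’s actual API — the pattern is just “reapply the same change until it commits,” with a cap so a hot conflict can’t spin forever:

```javascript
// Generic optimistic-concurrency retry loop (illustrative only,
// not Derby's API): rerun the action in the error handler.
function withRetry(action, maxAttempts) {
  var attempt = 0;
  function run() {
    attempt += 1;
    try {
      return action();
    } catch (err) {
      if (attempt >= maxAttempts) throw err;
      return run(); // reapply the same change and try again
    }
  }
  return run();
}

// Simulate an update that conflicts twice before succeeding.
var failures = 2;
var result = withRetry(function () {
  if (failures > 0) { failures -= 1; throw new Error("conflict"); }
  return "committed";
}, 5);
console.log(result); // "committed"
```

Inefficient under heavy contention, as noted above, but simple — and for most apps conflicts are rare enough that the retry path almost never runs.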

12:57 pmRace conditions and conflict resolution.  Once you have a lot of concurrent connections, you can run into a lot of them.  Makes synchronicity difficult.

12:54 pmI like how the underscore-signifies-privacy convention has now found its way into HTML rendering …

12:52 pmOoh … “slightly more complicated example”

12:51 pmNo, I get it.  The client side MVC framework is in JS … so writing the code once for the client means you can re-use the code for a server-side Node.JS setup.  Not bad, but it would mean migrating a lot of my existing .Net work if I want to take advantage of the write one, use anywhere paradigm.

12:50 pm“Write your routes once and they work on both the client and the server.” … so write them in which MVC framework?  The PHP frameworks and .Net frameworks are different enough that I’m concerned here …

12:47 pmI like the way the Derby markup looks … but now I’m wondering which IDEs support it.  I must say, Visual Studio Intellisense has me a bit spoiled when I start looking at web markup and IDEs.

12:46 pmIf any of your users don’t have Internet access but have your page loaded, they can still interact with the application because it shares a lot of the server code with the client already.

12:43 pmA to-do application demo in Derby.

12:42 pmViews, models, and routing.  Nothing new.  Except it’s entirely asynchronous and can connect an input field on one machine in one browser to a display field in another for another user entirely.

12:41 pmDerby is all JavaScript … I’m quite excited about that.  It means I can use it in my existing .Net projects as well as my open source PHP projects!

12:39 pmDerby is built around realtime and has a component called Racer that works as a realtime data synchronization engine with Node.js.

12:38 pmHe’s describing the disconnect between MVC on the server and “really complicated” jQuery on the client side.  Sadly, he’s describing the exact problem I was fighting through all of last week …

12:37 pmThe after-lunch talk is starting.  Introducing Derby.  It’s a new MVC framework that makes building realtime apps “easy.”  I’m very excited about this one!

12:19 pmI think I need to migrate over to the Track A room for the next several sessions … anyone near an open power plug?

11:48 amAnd now it’s time for lunch … yay for food! Feel free to track me down at some point and say hi!

11:48 amI tried using Growl on my iPod when it first came out. Haven’t touched it again since. I really thought it had disappeared entirely until Adam mentioned it just now in his chat.

11:44 amAnd now that I’ve compared realtime communication and push notifications to Flash … please don’t shoot me.

11:43 amUse push to enhance your application.  Give your users options and don’t let the technology get in the way of the experience.  Reminds me of the advice we’ve been giving developers regarding Flash for years.

11:39 amUrban Airship has their own push transport … called Helium. This warrants some looking into …

11:37 amBy the way, I’m working with my VPS host at the moment to correct the 8-minute timestamp issue on these posts.  Should be resolved within the next hour or so.  In other news, AtumIT rocks!

11:31 amReminds me of the disconnect between a sandbox PayPal account and a production one … you don’t know things will fail until they do :-)

11:31 am“99% of the problems using push come from a disconnect between development and production.”

11:30 amFrom the looks of things, Apple is a great way to learn sockets and push communication.  Too bad so many of the code examples are written in Objective-C.

11:26 amYou can request all three of those permissions or just a smaller subset of them.  Honestly, even all three isn’t that much … unless Apple expects to extend more permissions to push-enabled applications, I don’t understand why they’d make it that granular in the first place.

11:23 amOn iOS, applications can’t run in the background, so there are just a handful of things you can do: display an alert, add a badge (like an unread count in mail), and play a sound.
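
For reference, those three actions map directly to keys in the `aps` dictionary of an Apple push payload.  Building one in JavaScript — the key names follow Apple’s documented format, but the message text here is made up:

```javascript
// An Apple push notification payload: the "aps" dictionary carries the
// three things a (2011-era) iOS app can do without running in the
// background — show an alert, set a badge count, and play a sound.
var payload = {
  aps: {
    alert: "New message from Adam",  // hypothetical alert text
    badge: 1,                        // unread count shown on the app icon
    sound: "default"                 // play the default notification sound
  }
};

// APNs rejects oversized payloads (256 bytes at the time),
// so it's worth checking the serialized size before sending.
var json = JSON.stringify(payload);
console.log(json.length < 256); // true
```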

11:21 amPush using iOS and Apple for history and background…

11:17 amNow it’s time for Adam Lowry and “Connecting the Disconnected” …

11:10 amAn attendee was just asked for his opinion on an issue … “I blogged about it, so you can go read about it.”  Um … we don’t know who you are, buddy.

11:09 amIf storing state in a distributed system is a mess … why do we bother making a stateful system in the first place?

11:05 amFor those of you paying attention … the time tags on my updates are actually 8 minutes behind.  Not because I’m a slow typer, but because my VPS’ internal clock is off.

11:04 amEven though ZeroMQ can abstract a lot for you, it’s still too low-level most of the time.

11:02 amZeroMQ is a socket abstraction layer for messages rather than bytes.

10:58 am“Don’t communicate by sharing memory; share memory by communicating.”

10:57 amThe trick is being able to answer, “what’s happening now” not answering “what happened 5 minutes ago?”

10:52 amIt seems to me that there’s a disconnect at the data level.  On the one hand, we need to record data quickly to capture all of the realtime events that occur in a system.  But the discrete events aren’t what’s interesting … it’s the aggregation of that data that’s interesting.  So at the DB level we’re running into a speed issue – speed of recording events and speed of processing complex queries over those events.

10:52 amIn a realtime system, you have very simple bits of state, but the simple systems are more about answering questions regarding multiple users.  What are the trends across the system?

10:50 amThe tricky part with building a distributed system is controlling the multiple points of failure.

10:44 amAnd now a quick test post to check that polling refreshes are working …

10:42 amTesting the AJAX polling system to make sure it’s working properly …

10:49am – Installing a quicky AJAX polling system so you don’t have to refresh any more …

10:42am – 10 minute break until the next session … Time to breathe for a minute …

10:39am – Having multiple sharing/interaction features on a site is ineffective.  The user is one person, so breaking apart Facebook and Twitter and Tumblr and … sucks.  There are ways to communicate between them – Khris is recommending we all take a look at Backplane.

10:37am – Realtime is about different aspects of the web.  Creating data, delivering data, storing data.  But another great focus is processing the data to extract meaningful, marketable information.

10:31am – Great live example of a realtime interaction that isn’t a status update stream is the “Trend Watch” on reuters.com.

10:29am – Realtime is somehow associated to a list of updates presented in reverse chronological order.  But that’s not all that it is.  We need to break out of that stereotype.  I agree 100%; there’s far more data that can be presented in realtime than just status updates.  It’s just a matter of providing value so that you’re not just collecting data for data’s sake.

10:28am – “Once you lock in to one of these units, you can just rip through the rest of the industry.”

10:26am – Realtime on NBC’s website makes keeping your phone nearby while watching TV essential to the experience.  If you’re not watching the show and engaged with the site … you’re missing something.  Fantastic use of technology!

10:24am – A concept called “push to air” allowed publishers to quickly push content and comments from a live Twitter/Facebook/Forum feed directly to an on-air TV show.  This definitely makes it compelling for people to go to the site and interact with the feed.  “Hey, I might get on TV!”

10:20am – Print publishers and magazines can do interesting things, but they’re all going out of business and disappearing …

10:16am – Hearing about a simple product “anyone in this room could build in an afternoon,” realizing that I could build it in an afternoon, and hearing how much money it was sold for … I’m frustrated I wasn’t there first, but excited that I’m still there in the first handful of people in this industry.  Definitely a lot of financial potential here.

10:14am – To stay relevant, old time publishers like the Washington Post are going to need to transition away from a static, “crap” experience and towards a realtime one.  The new experience these days is Facebook and Twitter.  It’s live and realtime.  Who wants to go back to an old, static experience after that?  I can definitely relate … I get more news from Twitter than I do CNN …

10:13am – “The transition from the static web to the real time web isn’t just cool and exciting.  There’s a lot of money there!”

10:11am – “Twitter is doing 230 million checkins a day … and we’ll look back at that later and laugh and think it was a toy.”

10:09am – Look for the products and innovations that should be built a year from now or 5 years from now. If you focus on what needs to be built now, you’ve already missed the boat.

10:08am – Society tells you that you can’t predict the future. We think that’s crap.

10:05am – “The transition from the static web to the realtime web is as important as the transition from the quill to the printing press.”

10:02am – A “nontechnical” presentation at a tech-centered conference? Hmm …

9:54am – In the meantime, I’m wondering why so many “real time” applications (like the aforementioned Google Reader demo) are only realtime on the server side and not on the client side.  It would be huge if my feed could update in realtime.  Speaking as a publisher, it would be awesome if I could update my readers in realtime as well.  I think there might be real value in adding a tag to the head of my documents linking to a realtime hub.

It’s just a question of convincing more people to update content for the client in realtime once I start pushing content to aggregators in realtime.
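
For what it’s worth, the idea of linking to a realtime hub from the document head is essentially how PubSubHubbub discovery already works: the feed advertises its hub with a `link` element, and subscribers find it there.  A sketch of the two discovery elements in an Atom feed (the hub URL below is illustrative, not a real endpoint):

```xml
<!-- Inside the feed's <feed> element: tell subscribers where the hub is,
     and the feed's own canonical ("topic") URL. -->
<link rel="hub" href="http://hub.example.com/"/>
<link rel="self" href="http://mindsharestrategy.com/feed/"/>
```

The missing piece I’m describing is the same mechanism applied to the page a reader is actually looking at, not just the feed an aggregator polls.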

9:53am – Awesome presentation with some live demos.  Next up is Khris Loux talking about realtime and revenue.  I’ll be taking a copious amount of notes.

9:50am – The Google Reader demo was a server-to-server interaction … not a server-to-client interaction.  So while your feed would be updated on Google’s system, you wouldn’t see an update until you click the Reload button …

9:45am – Greetings to all of you reading this site in real-time.  Google Analytics tells me there are 6 of you at the moment.  Realtime web in action! :-)

9:43am – Apparently Tumblr uses a hub to push feed content out to subscribers in real time.  I’m now wondering why WordPress doesn’t use a similar setup … and just finally discovered a potential business use for SwiftStream at the same time …

9:40am – We use three parties: the publisher who has the data, the subscriber who wants the data, and the hub that routes data between the other two.

9:39am – The only widely-used protocol on the web is HTTP, even though there are better protocols out there.  So to make a realtime web on a large scale, we’ll need to use what’s already available and ubiquitous.

9:33am – An alarm clock is a real-life realtime example.  You could poll it … wake up every minute and see if it’s time to get up … or just wait for the alarm to go off instead.  I’d rather wait for the alarm.

9:32am – Realtime doesn’t mean it has to be now … realtime can be really slow.

9:30am – “The key to real time is to be like the kid in the backseat asking ‘are we there yet are we there yet are we there yet …’”  I’m impressed, Julien must read my blog.

9:27am – Making the web real-time versus making a specific website real-time involves making services and web servers push content back and forth in real time.

9:25am – First session is a little late, but looks to be pretty good regardless.  I heard Julien talking up his presentation during breakfast, so I’m looking forward to it.  Now that they’ve gotten the microphone working, that is …

9:15am – The conference is now open and there are a lot of people here.
