Channel: Gun.io

Best Free WordPress Plugins


I don't use WordPress myself any more, but many of my clients do, so here are some of the best Free and Open Source WordPress plugins that I use and recommend. All of these are free WordPress plugins with downloads and source code available.

W3 Total Cache

W3 Total Cache is the best caching plugin for WordPress. Once you install and configure it, you'll see a vast improvement in the load times of your pages. It features automatic caching, minification of assets, progressive loading and CDN integration if you're ready for that.

Home page and source code.

WP-Optimize

WP-Optimize is a little plugin for optimizing your database, meaning that responses are served faster. When coupled with caching, your site will be blazing fast.

Home page and source code.

WordPress SEO

WordPress SEO is an extremely comprehensive free search engine optimization plugin for WordPress. It has a very broad range of features, from basic things like title and slug optimization to more advanced features like authorship markup and automatic nofollowing. It's sweet! Try it out and you'll be ranking high on Google in no time.

Home page and download.

Akismet: WordPress Anti Spam

I'd be surprised if you hadn't heard of this one, but Akismet is the best free anti-spam WordPress plugin. It stops your comments section from being spammed with links to offshore pharmacies and other unscrupulous businesses.

Home page and source code.

WP Greet Box

WP Greet Box is a fun plugin which lets you display custom messages to inbound visitors based on where they came from. This is a must-have free WordPress plugin if you generate a lot of traffic through social media, as you can tailor your greeting to specific types of users. Simple, and very cool!

Home page and source code.

Conclusion

Hopefully, these plugins will have your WordPress site loading faster, ranking higher in search, and engaging better with your visitors!

What do you think? Are there any great free WordPress plugins that I should be listing here? Leave a comment below!


Passing Arguments To Embedded JavaScript


So, you're writing an embedded script and you want to pass some arbitrary parameters to your JavaScript. How do you do it? This quick little tutorial will show you two examples, one using the HTML5 data attribute, and the other using the query string.

Method 1: HTML5 'data' Attribute

This is the easiest method. HTML5 supports arbitrary attributes as long as they are prefixed with 'data-', so you can pass any arguments to your script with them.

So, your embed will look like this:

<script id="searcher" data-search="bananas"
        src="http://yoursite.io/searcher.js"></script>

Then, inside your JavaScript:

var script_tag = document.getElementById('searcher');
var search_term = script_tag.getAttribute("data-search");

And that's it! Easy peasy.

The downside of this is that some very very old browsers might not like your arbitrary HTML5 attributes. In that case, you can use the querystring approach.

Method 2: Use the QueryString

In this case, you're just going to pass your parameters to the JavaScript as part of the query string, just like on any other GET request.

So, your embedded script tag will look like this:

<script id="searcher"
    src="http://yoursite.io/searcher.js?search=bananas"></script>

Then, your JavaScript will look like this:

var script_tag = document.getElementById('searcher');
var query = script_tag.src.replace(/^[^\?]+\??/, '');

// Parse the querystring into arguments and parameters
var vars = query.split("&");
var args = {};
for (var i = 0; i < vars.length; i++) {
    var pair = vars[i].split("=");
    // decodeURIComponent doesn't expand "+" to a space.
    args[pair[0]] = decodeURIComponent(pair[1]).replace(/\+/g, ' ');
}
var search_term = args['search'];

And that's it! It's slightly uglier than the previous method, but it'll work on older browsers. Probably.

Conclusion

And that's it! Did this work for you? Please leave any questions and comments below!

Script Tags, Inbound Links and SEO


Suppose I have a popular script on my website that many people include as a script to add a feature to their sites.

Does that script tag contribute to the PageRank and overall SEO of the domain hosting my script? Or does that only apply to anchor tags?

I've searched online and read through all of Google's documentation, but I can't find an answer anywhere. Hopefully, some good Samaritan will find this post and leave an answer in the comments below!

Do you know? :)

Storing Multi-Line Strings in JSON


JSON is an extremely rigid format. It's great, but it has a couple of shortcomings, the largest of which is the inability to store multi-line strings. This can be particularly annoying for storing things like structured text and public keys in JSON for later interaction with JavaScript code in the browser or on the server in Node.

Fortunately, there is a quick and hacky solution!

Example

Imagine that we have a list of servers and their associated public keys. To store that data as JSON, we could break each multi-line key into an array of single-line strings, which might look something like this:
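Something like this, say, with made-up hostnames and truncated key material:

```javascript
// Made-up hostnames and truncated keys, purely for illustration:
// each multi-line public key is stored as an array of single-line strings.
var servers = {
  "web1.example.com": [
    "-----BEGIN PUBLIC KEY-----",
    "MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQC...",
    "-----END PUBLIC KEY-----"
  ],
  "web2.example.com": [
    "-----BEGIN PUBLIC KEY-----",
    "MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQD...",
    "-----END PUBLIC KEY-----"
  ]
};
```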

You can verify that this is valid JSON with JSONLint.

Then, to retrieve our text block, we just have to join them back together on the newline characters, which we can do like this:
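With hypothetical data in that shape, the join is a one-liner:

```javascript
// Hypothetical key data: one multi-line string stored as an array of lines.
var keyLines = [
  "-----BEGIN PUBLIC KEY-----",
  "MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQC...",
  "-----END PUBLIC KEY-----"
];

// Join the lines back together on newline characters to recover the block.
var keyText = keyLines.join("\n");
```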

And that's it! Now you can store multiline strings in JSON. It's not the prettiest thing, but it is the simplest given the constraints of JSON.

Did this help, or have you found a better way of doing this? Leave a comment below!

How to Search Git Logs


So, you want to search your git commit logs. Good news! It's really easy, and I'm going to show you how to do it.

If you use git, you're probably already familiar with the classic UNIX string searching tool, 'grep.' (If you already know about grep but want something better, you should try out Ack, but that's an unrelated issue.)

Lucky for you, git has a 'grep' mode!

Suppose I wanted to search for the string "facebook" in all of the commit messages of my project. All it takes is this one command:

git log --grep="facebook"

and you'll see all of the log messages which contain our search term!

Okay, now suppose that you want to search the contents of the commits themselves, not just the commit messages. Simple! Use git's 'grep' mode across every revision:

git grep "facebook" $(git rev-list --all)

This will show the commit ID and line number of all instances of your string. (The string can also be a regular expression, which is handy if you know regex, too.)

This can be made slightly more human-friendly by using the 'pickaxe' feature of git log, like this:

git log -S"facebook"

which will show the commit messages of all the commits which added or removed your string.

Hope this helped!

This Week So Far



My life for the past few days.

I think I need to go on holiday.

Doing it Wrong: What Real Hacker Hostels Would Look Like


There's an NY Times article being passed around about a small string of "Hacker Hostels" in Silicon Valley.

Conceptually, it's an awesome idea. Gun.io was founded in order to provide an economic support structure for those interested in the nomadic hacker lifestyle - hacking and adventuring! Which brings me to my main question: of all places, why the fuck are they building hacker hostels in Silicon Valley?


Why would I want to go to Silicon Valley when I could go skateboarding in a fountain in Shanghai?


I'm probably going to catch a lot of flames for this, but here's my honest opinion: Silicon Valley is a hellhole.

For those who haven't actually been but read tech news, Silicon Valley might seem like a tech Mecca, where developers are nurtured and ideas flourish. The reality is quite different. The valley is a giant corporate parking lot full of overpriced town houses, where programmers are exploited and greasy snake oil salesmen are giving the orders. The 'big ideas' are usually trivial and derivative. Everybody is chasing the dream of a startup jackpot, hoping that a large corporation will make them instantly rich if they just keep A/B testing enough. Nobody seems interested in working on problems which solve anything more than a minor inconvenience or provide mild entertainment. The nerd-entitlement is palpable. It isn't a fun, interesting or inspired place. It pretty much sucks.

Ranting aside, the idea of hacker-cohabitation is fantastic. I live with hackers and I love it. We push each other to work harder, we can get feedback from each other, and we also have a ton of fun. (Lately, this has involved a lot of Tremulous LAN'ing.)

We've traveled together as well, and I think we all share the same idea about why we're in the game at all. We're not hacking so that one day we can buy nice houses in Menlo Park, spending our days 'mentoring' the next stupid startup and generally feeling smug about ourselves - we're hacking so we can buy plane tickets and bed space in far away places, to share in other sights, smells, tastes and ideas. The ideas and perspective gained from immersion in another culture are what take a project to a higher level. I think that's why the Valley is so saturated with trivial ideas - it's a static monoculture with no real challenges left.

So I don't want to go to Silicon Valley - I want to go to Quito. I want to go to Jakarta. I want to go to Shanghai. I want to go to Berlin. I want to arrive in a new city and pay a reasonable rate for a place where I know I can have a decent internet connection, a desk, maybe a bike to get around, a place to lay my head and some cool hackers to grab beers with... and I want to do it again in a new city two weeks later.

"AirBnb for Hackers" has been in the Gun.io skunkworks for a long time, and it's certainly still not ready to go live, but I felt like using this opportunity to share my vision. It's fuzzy and I haven't checked the math yet, but I think at the core, this is something that we should be exploring. I hope somebody takes this idea and runs with it.

There are beautiful new ways of living yet to be discovered. Let's find them!

(Post scriptum: This article was in no way meant to be a diss to Chez JJ. I think they're on to something, I just wish they'd branch out a bit more!)

Internal Emails Show Poorly Planned DEA Raid Wasted Oakland PD Resources While 7 Killed in Nearby School Shooting


At 10:30AM on April 2, 2012, 100 Drug Enforcement Agency (DEA) agents raided 6 marijuana dispensaries in Oakland, California while local Oakland PD officers contained a crowd of protesters. Meanwhile, at 10:30AM in another part of town at Oikos University, One L. Goh opened fire with a .45-caliber semi-automatic handgun and murdered 7 people in the 3rd deadliest university shooting in United States history. Recently released Oakland PD internal emails clearly show that the DEA did not give the Oakland PD sufficient notice about the impending raid, nor did they sufficiently plan their exit from the scene, and that their poor planning directly resulted in overstretching of police resources which led to an understaffed response to the shooting and other high priority calls.


I obtained these documents as the result of a Freedom of Information Act request under California's Sunshine Amendment (care of MuckRock.com). There are only 3 pages, but they are extremely illustrative. I commend the city of Oakland on their swift and accurate processing of this request.

Points of Interest

In these pages, there are a few main things to notice:

  • The first official record of the Oakland PD's knowledge of the impending DEA raid was at 5:39 AM, less than 5 hours before the raid began. (p3)
  • The Lieutenant of Police in charge of that district knew in advance that he did not have enough resources to handle the raid and asked for help from other divisions. (p3)
  • Use of OPD forces for crowd control affected the police response to other high priority calls.
  • The warrant could have been served at night, to avoid use of daytime police forces. (p2)
  • The OPD did not have a point of contact with the federal government for coordinating the raid. (p2)
  • The DEA did not have an 'exit' plan for the raid, which further wasted police time and resources and exposed officers to unnecessary conflict. (p2)

These complaints were made by Oakland PD officer Lt. Kevin Wiley, who also filed them with the federal government.

Conclusions

These documents clearly show that not only do the people of Oakland disapprove of these raids on dispensaries, which are completely legal under California law, but the Oakland Police Department strongly disapproves of them as well. Although it is impossible to say whether an increased police presence at the scene of the Oikos University shooting would have prevented such a substantial loss of life, the OPD's own officers clearly feel that the raids were a waste of resources which resulted in a lack of response to high-priority incidents.

I have filed another Freedom of Information Act request with the DEA although they have not responded with any documents yet. My request to the Internal Revenue Service returned no responsive documents. Hopefully, the DEA will reply with their side of the story soon enough.

In the meantime, I would encourage you to contact the DEA and voice your concerns about these raids. Please feel free to refer others to this article, to reproduce this text and the documents, and to provide your own commentary.


How a Shortcoming of the English Language Doomed Two Decades of Web Design


Register? Nah. Check-in? No. Sign in/up? Nope. Join in? Maaaybe. Enrollogin? Hell no.

We're reworking a whole lot of the site at the moment (you're gonna love it!) and one little sticking point we've stumbled upon is that, due to limits of the English language and nearly two decades of web design trends, we are forced to have two different forms for 'Sign Up' and 'Sign in.' There isn't a word which means "sign in if you have an account or make one if you don't have one already." This is completely stupid and there isn't really anything we can do about it.

Informatically Speaking

From an information perspective, two fields is all we ever need to require from a user: their email address and their desired password.


Like this!

If we want to associate their account with a username, we can already deduce that from the email address - simply take everything before the '@'.
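For example (the names here are mine, purely for illustration):

```javascript
// Derive a default username from an email address by taking
// everything before the '@'.
var email = "grace@example.com";
var username = email.split("@")[0];  // "grace"
```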

We're a quickly growing young startup - we really just want as many interested people to enroll as possible. If we have to slightly uglify our 'settings' page or burn a bit of energy on support for the percentage of users who don't want to use their email address name as their username, we can manage that. We'd certainly rather have them do that than never join at all because of shitty forms!

In the Wild

Even Tumblr, who have some of the most usable, friendly design on the web, struggle with this problem and have two forms.


Sign up and sign in.


They make the process smoother by having a sexy little javascript animation to tween the two forms, but the process still isn't quite elegant.

Amazon have an interesting solution.


Sign in, without a password for new customers.


Rather than require '3 or 2' fields, Amazon manages to drop it down to '2 or 1' field, with a password being collected later in the checkout process for new users. This makes sense for them with the flow of their shopping cart system, but I still think that this is an inelegant system. Amazon also don't even really require usernames at all, plus it still says 'Sign in' at the top, where some users are actually 'Signing up.'

Facebook Connect - One Step Forward, Two Steps Back


I'd tell you not to click this, but you've probably already figured that out.

There is a new heavyweight in the sign-in ecosystem, known to geeks as 'OAuth' and known to most plebs as 'Facebook Connect.' OAuth is a way for users to use their accounts on other websites to create accounts and sign in on your website.

For instance, as we're a website for open source developers to discover new career and freelance job opportunities, you can sign up for Gun.io with GitHub and we'll automatically make you an account and fill out your profile for you. Neat! The particularly interesting thing about this is that because of the protocol, signing up and signing in are essentially the same thing for the user - one URL performs both actions with a single click and no additional information necessary... in theory.

Now, besides the fact that OAuth is a complete fucking nightmare to work with, it's plagued with plenty of other problems. The API, documentation and example libraries are a complete mess, many major websites don't even abide by the proper specifications, the information given back isn't usually verified properly, and the leave-the-site-accept-some-terms-and-then-return-to-it flow is extremely unintuitive.

The largest problem, however, is that most users hate these buttons and won't go near them with a ten foot pole. "Facebook Connect" is now almost universally synonymous with "spam the shit out of my friends and make me look like a fucking idiot." Using this kind of sign-in service was once a really good idea, but Facebook and their tolerance of spammy applications has pissed in the pool for everybody.

It's a damn shame, too, because "Connect" is actually a fairly nice way of saying "sign up and/or sign in", but I imagine the Facebook law machine will rain hellfire down on anybody else who tries to use that.


I miss this so, so much.

A Really Lame Time Machine

So, if anybody has a time machine which can only fix trivial problems, please go back in time to 1995 (or whenever it was) and find whoever made the first web 'login' system. Make them only have 2 input boxes and a new word meaning 'Sign in, or sign up if you don't have an account already!' There was so much new jargon going on back then that adding one more new term wouldn't hurt, and I think the savings to all future web sites and web users would be absolutely immense.

Conclusion

In conclusion, we're basically fucked. We've decided to use a swooshy little toggle switch, a bit like Tumblr does, but we're certainly open to suggestions!

What do you think? Have you seen any better examples in the wild? Do you have any better ideas that I haven't addressed here? As a user, how would you prefer to approach this problem?

Leave your answers in the comments below!

Why News Outlets in Different Countries Have Different Olympic Medal Tables


As a native Briton, I've been remarkably proud of how well all of the British competitors have been doing at this Olympic Games. I've been watching all of the events I can and keeping a keen eye on the medal table, and by some stroke of luck or talent, Britain is in third... depending on who you ask.

Adding it Up


The NBC Medal Table (Top) and the BBC Medal Table (Bottom)


If you ask NBC, the official broadcaster of the games, who is in first, they'll tell you it's the good ol' US of A. However, if you ask most of the rest of the world, they'll tell you it's China. This is because NBC use the raw count of medals to rank nations, the British media use a weighted (5:3:1) value for different types of medals, and most other outlets, including the Associated Press and anybody who uses the AP for their data, rank by the number of Gold medals acquired.

When I first noticed this, I immediately cried foul play! Surely, I thought, the notoriously partial American media is deliberately skewing the results to show American might. And they are, obviously; however, the story is a bit more interesting than that.

Enter the IOC

The masters of the games, the International Olympic Committee, have officially taken no stance on this. Instead, they stress that the Games are not about countries competing against each other, but rather about the athletes competing in their own events. In fact, their own bylaws, the Olympic Charter (Chapter 1, Section 6), state that: “The Olympic Games are competitions between athletes in individual or team events and not between countries.”

Furthermore, IOC President Jacques Rogge has said:

"I believe each country will highlight what suits it best. One country will say, 'Gold medals.' The other country will say, 'The total tally counts.' We take no position on that."


Now, as much as I don't like the IOC (largely because of their non-recognition of an independent Taiwan and their history of bribery), I happen to agree with their official policy in this case - it's best they focus on the sport, not the politics. You'll notice that you won't find any medal tables on the official Olympic website.

Around the World

So, the Americans look for total medals. What about the rest of the world? I poked around in various international media to see how they ranked the competing nations.


The Xinhua Olympic Medal Table

At the Chinese news outlet Xinhua, countries are ranked by their number of Golds, and China is on top. No surprises there.


The Moscow Times Olympic Medal Table

Now, I don't think the Russians care very much about the Olympics, or perhaps they have other things to worry about (free Pussy Riot!), as it actually took me a little while to find a Russian news outlet covering them enough to provide a medal table. When I did, they too used the number of Golds.


Chosun Gets it Wrong

I couldn't find any Korean news outlets with a medal table, but I did find the Korean paper Chosun misreporting how they were ranking the Korean nation - Korea is fourth in Gold count, not overall medal count (where they are 9th). It's okay Chosun, it's all very confusing.

Other Ranking Systems

There are even more ways of slicing it! For instance, Simon Forsyth has made a really cool tool for showing how nations stack up when other information about their country, like the overall population or the gross domestic product, is factored in. (When population is factored in, Grenada are the real champs of these games. Fuck yeah, Grenada! You go, girl!)

Daily Telegraph Olympic Medal Table

Australia's Daily Telegraph invented a new country.

Here's a fun one: down under in Australia, the Daily Telegraph smooshed Australia and New Zealand together to squeeze them into the top 10. You can't really blame them, as that's what the readers want and you've only got a limited amount of space on a page. Good on you, mate!

My Humble Opinion

Now, personally, I think that if we're going to rank the countries at all, we should use a weighted value for different types of medals with a bit of an exponential gradient, so gold medals would be worth 5 points, silvers 2 and bronzes 1. The Gold medal is a special thing, and those who earn it should be rewarded for it, but the silver winners should still have some value too.
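As a sketch, that weighting is trivial to compute (the medal counts below are hypothetical, not real figures from these games):

```javascript
// Weighted medal score: gold = 5 points, silver = 2, bronze = 1.
function weightedScore(gold, silver, bronze) {
  return 5 * gold + 2 * silver + 1 * bronze;
}

weightedScore(3, 1, 2);  // → 19
```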

That being said, I've spent most of the games sitting on my butt watching pirated international feeds, so my opinion probably shouldn't count for very much.

What do you think? Is it wrong for NBC to put the USA at the top while the rest of the world has settled on a different ranking system, or is that okay? Leave your comments below!

Fast as Fuck Django, Part 1: Using a Profiler


If you haven't noticed, we've made some serious updates to gun.io!

As our search results are now rendered on the client side rather than on the server, we had to write a whole new API to get the data into the client. Once we had written it, we found that it was far slower than what we considered to be acceptable - results were averaging 1.7s, and we shoot for sub-200ms response times. Not good! But what was going wrong? Why was Django being so slow? It actually wasn't easy to see immediately, and the normally wonderful Django Debug Toolbar wasn't any use, as it doesn't work on AJAX requests.

Using a Profiler

What we needed was a way to profile our code through the browser. Much digging around on the internet found a very old snippet of a Django profiling middleware which seemed perfect for the task. Unfortunately, it didn't work with modern versions of Django, but I quickly fixed it up and am including it below.
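A sketch of such a middleware, in the old MIDDLEWARE_CLASSES style, swapping cProfile in for the deprecated hotshot module (the structure and names here are mine, not the original snippet's):

```python
import cProfile
import io
import pstats


class ProfileMiddleware(object):
    """Profile any request whose query string contains 'prof'."""

    def process_view(self, request, view_func, view_args, view_kwargs):
        if 'prof' in request.GET:
            # Run the view under the profiler and return its response.
            self.profiler = cProfile.Profile()
            return self.profiler.runcall(
                view_func, request, *view_args, **view_kwargs)

    def process_response(self, request, response):
        if 'prof' in request.GET:
            # Replace the page body with the profiling report,
            # sorted by cumulative time.
            out = io.StringIO()
            stats = pstats.Stats(self.profiler, stream=out)
            stats.sort_stats('cumulative').print_stats(40)
            response.content = out.getvalue()
        return response
```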

To use this middleware, save this file in your application as middleware.py. Then, in the MIDDLEWARE_CLASSES of your settings.py, include the line

'yourproject.yourapp.middleware.ProfileMiddleware',

(with the 'yourproject' and 'yourapp' changed to your appropriate values, of course.) The middleware uses the 'hotshot' profiler, which ships with Python 2's standard library (it was removed in Python 3), so there's nothing extra to install.

Reading the Output

Now, to use the profiler, simply add the string '?prof' to the end of your URL, and you'll see the profile of that page. Awesome! But.. what does this wall of voodoo mean?

The profiling output is not very intuitive if you haven't seen it before, so let's dive in!

The columns it's showing you are the number of times a function is called, the total time spent in that function itself, the time per call, the cumulative time of that function (including everything it calls), and the cumulative time per primitive call. We're interested in 'cumtime', the fourth column. The results aren't sorted, so you'll have to look through the list and see which functions are taking the most time. Anything greater than 0.100 or even 0.050 could be worth investigating.

If the function has a low per-call time but a high cumulative time, your code is probably spending too much time in a loop - see if you can find a more efficient algorithm, use fewer nested loops, or find a way to break out earlier! The string associated with your problem line may not be immediately helpful, but hopefully it will serve as a good starting point for you - if you see many expensive calls to 'sql', it could be your database code; if you see many expensive calls to 'template', it could be your templating code, etcetera etcetera.

In our case, we had forgotten to use 'select_related()' on one of our database calls. Now our API is super fast, hooray!

Once you're done, remember to remove the middleware from your settings!

Hopefully this was helpful for you! If you liked this article, leave a comment below and I'll write some more posts on how to speed up Django, maybe one about all of the ways to use caching, one about speeding up the templating and one about building fast models and how to access them properly. What do you think?

Should we move to San Francisco (permanently)?


We cannot be the first band of technology entrepreneurs in a distributed setup to throw our hands up in despair after a painful and perhaps useless four-hour phone call and ask ourselves “Should we just move to San Francisco?” Surely, logistical and communicative frustrations must make up a significant portion of the operational concerns that all start-up founders must confront daily.

The context

Here’s where we’re at – I'm a part of a start-up that connects top tier open source programmers with freelance and full-time gigs – known to those in the know as gun.io. The company, until very recently, was run by a single founder, Rich. He brought me and JP (from whom you’ll hear a few things very soon) on board to provide some support to you – our user community. Right now we're just three guys – one in Berkeley, California, another in Harrisburg, Pennsylvania, and I live in Nashville, Tennessee. We're growing at a fast clip, so we've got our hands full.

At first glance, a start-up that connects programming talent to sweet technology jobs should absolutely be based in Silicon Valley or the SF/Bay area – the proximity to a large potential user base, strong talent networks, and a multitude of support services, investors, and mentors would all be very helpful. It would likely cut down on our phone and Skype time as well.

The alpha test

As company policy, we make sure to regularly annoy our user base by peppering them with questions and requests for feedback about everything (I understand this is referred to as outreach). So, we all decided to meet up in San Francisco, where a large part of our user base resides, to see if we could hash this issue out. We spent a large part of the week meeting with members of our community, other founders, and advisers and mentors. We consumed our fill of delicious food, strong coffee and good beer each day – things for which SF has been made famous. By week's end, we felt like we had gathered enough data to take the company up a notch, and had built a pretty strong internal case as to why the storied Bay Area was so storied – and why we should skip our flights back and just stay.

TechCrunch's office

At TechCrunch's office, we're waiting for a tour of the place and a chat. Note the sweet pong table in the background.

The findings

After having spent roughly a week and a half in San Francisco, we decided to head back to the Midwest and to continue to grow gun.io from there. It was a tough choice, but here's why we made it:

We're entirely bootstrapped. We're able to cover our personal expenses through the start-up primarily due to the affordability of life in the Midwest. This isn't merely an issue of personal comfort – we believe that operating under the stress of personal austerity can hamper our ability to make wise and deliberate decisions for the future of the business. Moreover, it frees up more cash to reinvest back into the company – potentially allowing us to step up our marketing efforts or to bring on some more people sooner. Raising money is a potential solution, but we feel it's premature from a strategy standpoint – it brings in too many stakeholders too early and will layer on an additional level of complexity to the business's operations.

We want to enable programmers to pursue the attachment-free lifestyle of a freelancer, unconstrained by location or routine, so that they may work on projects they find personally fulfilling, not just financially necessary. We feel that it would be inauthentic to champion this lifestyle on the one hand, but submit to the demands of corporate expediency in our own. As our community expands, we will have to reconsider this point, but for now, we think we should embrace the lifestyle that we in part encourage – if even purely out of a concern for our personal integrity.

Chicago to San Francisco is a fairly painless route. And it's one that's doable each month for a week on the cheap. What money we save on rent, food, and general living can be spent in a guilt-free manner on the trip over with a good amount of company surplus cash to spare. We can spend the rest of the time grinding out sales via other means, and assess our results as a team in San Francisco on a monthly basis. It's possible that “selling” would prove easier if we were face-to-face with the very large customer base in the Bay, but we've been achieving consistent week-on-week growth so far. Plus, many of our current customers and hackers aren't even US-based. It's our belief that our awkwardly distributed stateside setup will help us continue to build a geographically diverse community of talented hackers and good gigs. Moreover, we'll still be in San Francisco 25 percent out of the year.

And finally, perhaps my strongest point and one that needs the least defending: there have been several successful companies with distributed founder teams, spread across not just a single country, but the world.

As it stands, the recent decision NOT to move was made with a statistically insignificant data set (a hallmark of lean start-ups, I suppose), so I think it's appropriate to consider the above points as slightly-validated assumptions rather than conclusions. In the coming weeks, if we're confronted with data that speaks to the absolute contrary of what's written above, well, we're movin' to San Francisco and that's pretty much that. Until then, we've got an incomplete, but actionable answer: we'll see.

My take

A prescriptive lesson we took away from this is that start-ups should treat a move to San Francisco, or any other city for that matter, as a possible intermediate step rather than the goal. The goal is to grow the business, and a move should only happen if it measurably contributes to that end. However, as entrepreneurs, we have a tendency to put the cart before the horse (e.g., attitudes toward raising venture cash), because we can visualize the action and outcome on a visceral level. Executing on these intermediate goals therefore makes us feel like we're headed to success, even if the user growth or sales metrics indicate otherwise.

However, start-up advice is cheap, especially when it is presented by way of partially self-promotional anecdote and when it comes from an unproven entrepreneur on a topic about which he has much to learn. So, I think it makes sense to open it up to the floor for discussion: other entrepreneurs who have asked and answered this question – whether to move or not – what did you decide and how did you come to that decision?

Social Entrepreneurship


I know that many of you are working on important social initiatives on the low. With this blog post, I’m hoping to encourage you guys – our users – to reach out to us to have your work showcased here. Since we’re software-heavy at gun.io, I’ll focus on projects in this realm for now, but feel free to contact us if you’re doing cool work in an adjacent field.

Long ago, before this life, I undertook a rigorous study of development economics. What I mean by that is I took a few college seminars on the subject and argued with friends as if I were an expert political theorist. Today, the more I learn about technology and entrepreneurship, the more convinced I am that entrepreneurs, specifically of the Internet era, are in a unique position to promote local development and social change, not just through expensive and expansive institutional efforts, but simply by improving technological literacy at the individual level.

Hear me out – the wiki definition of social entrepreneurship says that the practice involves the use of entrepreneurial principles to catalyze social change. My take is that if you are doing well as an entrepreneur and educating others on the process, you are, at least in one capacity, performing social entrepreneurship by giving people a means by which they may create value and wealth. Your entrepreneur day job is a live drill (to borrow a term from martial arts) to hone the skills you’ll eventually pass on. To some extent, I think we can all be social entrepreneurs without quitting our day jobs or foregoing our secret desire to own an all-marble private jet, as long as we pay our success forward.

Currently, websites such as stackoverflow.com, khanacademy.org, and even reddit.com/r/learnprogramming are acting as solid starting resources for the next generation of software developers across the world. Sites such as ours and a few others are creating marketplaces for these freshly-minted programmers to sell their wares. I submit that it takes many years of dedicated work to become a talented software engineer, but I’d argue that the skill threshold to achieve wage-earning competency is fairly low relative to other skill-intensive professions (And yet, you do get what you pay for). What I’m getting at is that programming is a supremely accessible pursuit – there is little monetary investment required in becoming an extremely talented creator of stuff. This is far from the case in law, medicine, or other vocational expressions of the sciences – you must typically assume gobs of debt on top of spending many years to even have a chance at earning a high wage.

However, for programming to be a ticket out of poverty, you’ve probably already got to be middle-income, or at least have easy access to computers. Enter cool initiatives such as One Laptop Per Child or even this little US$25 machine that are increasing the proliferation of technology throughout the developing world.

At the institutional level, technology is promoting more responsive government in India. In Chile, the government is trying to catalyze local economic development by enticing immigrant technology entrepreneurs with free housing, and a check for US$40,000. At home in the US, our government has adopted lean start-up principles in an effort to improve the quality of their governance. (While I myself tend to be slightly skeptical of the last program, I do think it’s a good start.)

If you’ve read this far, you’re either my grandmother or someone who’s interested in helping promote economic or social development through entrepreneurship and/or technology. So seriously, if you’re doing some cool work in this field I encourage you to contact us and we’d love to showcase your work here.

Introducing Keen IO!


Teja here - hopefully this doesn't get too confusing, but we're trying something at gun.io HQ in an attempt to introduce you guys to a few neat products. We'll run a few guest blogs in the coming weeks from a variety of engineers in the form of tutorials. This first post is by Daniel Kador, CTO of Keen IO, a data science company.

Hello!

I'm Daniel Kador, CTO and co-founder of Keen IO. Prior to starting Keen IO a year ago, I was a lead engineer at salesforce.com, where I helped to build their APIs.

About Keen IO

At Keen IO, we help app developers build custom analytics and data science features directly into their mobile apps and web dashboards. We provide the infrastructure and APIs to collect data and build analytics into your business.

We created Keen IO because we needed a customizable yet scalable analytics store at our old jobs, and we weren't happy with anything currently on the market. Just as Twilio made it so developers never had to write SMS software again, and as SendGrid made it so developers never had to deal with the headaches of email deliverability, we're making it so that developers don't ever have to build analytics infrastructure again.

What this means is that you no longer have to build your own custom analytics solution. This is the key difference between us and other Analytics-as-a-Service companies. They all do a good job, but they want you to log in to their dashboard so you can create and view a standard set of charts. You're out of luck if you want a new type of chart. And don't even try to embed a chart in your site or app - everybody else is interested in keeping your data siloed in their dashboard. So you end up having to host the data yourself AND create a way to analyze it.

That's no good. We've been down that path before, and, quite frankly, it sucks.

So how does Keen IO help? We have three main APIs:

  1. APIs to collect data from your mobile app, web app, backend server, or almost any other source. Sending data to us is both easy and fast.
  2. APIs to analyze the data you've collected. There's no use in collecting data unless you can start crunching it, right?
  3. APIs to visualize the results of the analysis. Raw analysis results can be interesting, but everybody loves a pretty chart!
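To make the three steps concrete, here is a self-contained sketch in plain JavaScript – a toy model of the collect/analyze/visualize pipeline, not the Keen IO SDK itself (all names below are invented for the illustration):

```javascript
// Toy model of the collect -> analyze -> visualize pipeline.
const DAY = 24 * 60 * 60 * 1000;
const collections = {};

// 1. "Collect": store an event in a named collection, stamped with a timestamp.
function addEvent(collection, event, timestamp) {
  (collections[collection] = collections[collection] || []).push(
    Object.assign({}, event, { timestamp: timestamp })
  );
}

// 2. "Analyze": sum a numeric property over events from the last seven days.
function sumLastSevenDays(collection, getValue, now) {
  return (collections[collection] || [])
    .filter((e) => now - e.timestamp <= 7 * DAY)
    .reduce((total, e) => total + getValue(e), 0);
}

// 3. "Visualize": render the result as a trivially formatted string.
function renderMetric(label, value) {
  return label + ": $" + value.toFixed(2);
}

const now = Date.now();
addEvent("purchases", { item: { name: "hat", price: 10 } }, now - 1 * DAY);
addEvent("purchases", { item: { name: "boots", price: 25 } }, now - 3 * DAY);
addEvent("purchases", { item: { name: "cane", price: 99 } }, now - 30 * DAY); // too old

const revenue = sumLastSevenDays("purchases", (e) => e.item.price, now);
```

The real service performs steps 2 and 3 server-side over an HTTP API, so your app never has to store or crunch the raw events itself.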

Let me give you an example of how one of our customers is using Keen IO today.

There's a great new company called Kickfolio. Our buddies there created a lovely way to run your fully native iPhone app in the browser. You'd never know it, but Keen IO powers the analytics charts on every Kickfolio app page. Kickfolio uses Keen IO to track user sessions and display engagement graphs for every app on their platform. Learn more about how Kickfolio uses Keen IO to provide seamless analytics dashboards to their customers.

Let's Play!

Ready to dive in and actually use the service? Let's get started! I'll walk you through getting an account at Keen IO and then using our fresh JavaScript SDK to send, analyze, and visualize data. Don't know how to use JavaScript? Check out our docs for other environments (including iOS, Android, and Ruby).

Signup!

Your first step is to sign up for Keen IO. Go here and click on the big "Signup with GitHub" button.

Create a Project

Back? Awesome! Now let's get you to create a new Project for this getting started guide. At Keen IO, a Project is essentially a data silo. Practically speaking, in the mobile world, a Project holds the data for one app. Click here to create a new project. Name it something fun. I'd suggest "Edward and Bella's Wild Ride" or "Bieber's Favorite Hair Salon".

Okay, you've created your creatively named Project? Take a look at the Project Settings page and note your Keen IO Project ID and API Key. You'll need these in the next step!

Configure the JavaScript SDK

Copy and paste the following <script> tag inside your HTML page's <head> element:
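The original snippet did not survive in this archive; a hedged reconstruction of its shape is below (the loader comment is a placeholder – copy the real tag from Keen IO's setup page):

```html
<script type="text/javascript">
  var Keen = Keen || {};
  // ... async loader for the Keen IO JS SDK goes here (copy it from Keen's setup page) ...
  Keen.configure("your_project_id", "your_api_key");
</script>
```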

This bootstraps the Keen IO JS SDK's interface. Note the last line before the closing </script>. Replace "your_project_id" with, well, your Project ID. Same for API Key. And then you're good to go!

Send An Event

Now it's time to actually send us some data, in the form of an Event. Events are the actions that are happening in your app that you want to track. Events are stored in Event Collections.

Let's add an Event to an Event Collection. Again, you get to decide what to name this sucker. I'm going to call mine "purchases", but I'm sure you can do better. All you have to do now is send an Event, which in JS is just a plain object. Here's how:
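The code sample was lost in this archive; a hedged sketch of the call follows (the addEvent name and payload shape are assumptions from the era's Keen JS SDK – verify against Keen's docs):

```javascript
// Add one Event (a plain JS object) to the "purchases" Event Collection.
Keen.addEvent("purchases", {
  item: {
    name: "golden gun",
    price: 1000
  }
});
```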

That's it! The JS library will asynchronously send this off to Keen IO where we'll store it.

Analyze and Visualize Your Data

"Pffffffft!", you might be saying. "Storing data's easy! How do I run analysis on this stuff?" Easy, tiger. You're just one step away. Check out this code that tells you how much money has been spent on your "purchases" (remember to change the Event Collection name to whatever you used!) over the last seven days:

Once the JS library has been fully initialized, this will send an HTTP request to Keen IO and we'll respond with the sum of the values in "item.price" over the last seven days!

Not cool enough? Want to visualize your data? Okay, let's do it.

Let's take the query we just did and put a chart on the page. The first step is to declare an HTML <div> element with an id that we can reference. Try something like:
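The example element was lost in this archive; any empty block element with an id works, e.g. (the id is arbitrary – just reference the same one from your JS):

```html
<div id="revenue-metric"></div>
```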

Now let's modify the function we passed to Keen.onChartsReady() to draw the Metric instead of just getting the raw response:
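The snippet was lost in this archive; a hedged sketch follows (the draw signature and option names are assumptions – see Keen's visualization docs):

```javascript
Keen.onChartsReady(function () {
  var metric = new Keen.Metric("purchases", {
    analysisType: "sum",
    targetProperty: "item.price",
    timeframe: "last_7_days"
  });
  // Draw the Metric into the container instead of logging the raw response.
  metric.draw(document.getElementById("revenue-metric"), {
    label: "Total Revenue" // visual options (e.g. background color) go here
  });
});
```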

Reload your page and you should see a nicely styled number showing your total revenue! You can add visual customizations, like changing the background color.

Let's go one step further. What if you wanted to see your revenue over the last week, but broken down day-by-day? Easy! We'll just create a Series instead of a Metric. Check it out:
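The snippet was lost in this archive; a hedged sketch follows – a Series is the same query with an interval, so the result comes back bucketed day by day (names assumed, as before):

```javascript
Keen.onChartsReady(function () {
  var series = new Keen.Series("purchases", {
    analysisType: "sum",
    targetProperty: "item.price",
    timeframe: "last_7_days",
    interval: "daily" // break the sum down into daily buckets
  });
  series.draw(document.getElementById("revenue-metric"));
});
```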

Refresh and voila!

This is just the beginning of what you can do with Keen IO. If you're ready to do analytics right, in minutes instead of months, I invite you to take a closer look at our docs, or chat with our team in our users chat. And get in touch with me personally, any time - follow me on Twitter.

An interview with Virgil Archer


Today, we spoke to Virgil Archer, one of the guys behind TechYizu and the Startup Leadership Program in Shanghai, China. For all of you developers, entrepreneurs, and nomads considering heading out East in search of adventure and riches – this one’s for y’all.



Virgil splits time between working his venture capitalist gig for a noted telecom fund, building a robust startup ecosystem in China, and frequenting happy hours in Shanghai’s storied expat spots. We caught him on a Friday night – he made sure to let us know that he passed up one such opportunity to field our call.

Gun.io: Can you give our users some intel on why they should listen to you?

Virgil: I was involved in the tech community in Boulder, where I went to school. After I graduated, I worked as a banker in London, and then found myself working for an Internet startup in China. I parlayed that position into a gig for one of our lead investors, which later turned into a full-time role. I split my time now between my job, TechYizu, where we put on tech-related events, and hosting workshops for entrepreneurs. As a foreigner in China, my experience isn’t unique – job- and industry-hopping is par for the course in a labor market as fluid as China’s.

Gun.io: Can you identify a few peculiarities in launching a business in China?

Virgil: From the perspective of an entrepreneur, even if you prove traction, it’s hard to get taken seriously by institutional money in China if you’re lacking a native Chinese co-founder. If you scale to a medium- or large-sized company, and the government takes notice, you’ve almost got to have a member of the Communist Party as one of your C-level executives. You see many foreign founders getting forced out during Series-B financing for this reason. From an investing perspective, founders will accept terms that would never fly in the US. For example, it’s common to have term sheets that, in the event of liquidation, provide the firm claw-backs, but also permit it to go after founders’ personal assets. On top of all this, there’s also mimicking of effective business models. 36kr.com, for example, originally sourced much of its content from TechCrunch.

Gun.io: Could you explain this ‘mimicking’ process a little more?

Virgil: A while ago, group buying was popular. Then, it was ecommerce. More recently, you’re finding Pinterest clones. Entrepreneurs typically accept having clones spring up as a part of bringing new products to market.

Gun.io: Are there many foreign entrepreneurs in China? What’s the ratio of foreign entrepreneurs to local entrepreneurs?

Virgil: A foreign founder of a fairly successful service business in Shanghai jokingly said that foreigners – men especially – when they come to China, realize that they’re treated and paid exceptionally well by locals. They’re invited to the hottest clubs, and often hang out with very wealthy people. This sort of inflates their ego and gives them the impression that they could easily launch a business in the country too. On the Chinese side, there are cultural expectations that demand that children – again, men especially – are financially independent and able to support a wife and two sets of parents by age 30. These obligations make the initial periods of being an entrepreneur especially challenging, and that’s why many Chinese entrepreneurs never get started. Because these trends are occurring simultaneously, you tend to see more foreign entrepreneurs than local ones in China, for now at least. That said, foreigners rarely quit their day job, even if they’re finding that their project is getting traction. Projects are likely to flame out and even their support networks are far away.

Gun.io: How’s the outlook for Asia in general from the startup perspective?

Virgil: Singapore’s very supportive of new businesses – they make it easy to file your documents and receive grant money. Indonesia looks promising. For now though, China is the place to be. I think we’ll see more Internet companies listing on the Shenzhen exchange, instead of going to the NYSE and using esoteric techniques such as reverse mergers that obfuscate their financials from foreign investors. Currently, Chinese companies have to keep two separate sets of books – one for themselves and one for the government – a practice which understandably worries foreign investors. Successful start-ups are typically started by former employees of large, reputable technology companies such as Tencent and Alibaba – sort of similar to what you see in the US with alums of Google, Yahoo, etc.

Gun.io: Can you specifically speak to hiring practices in China?

Virgil: HR is a hell of a thing, at least in China. Once you hire someone, it’s almost impossible to get rid of them. You can get dragged through courts fairly easily, and all the while, you’re paying their salary. I’d recommend having a Chinese co-founder that can pull contacts from his social circle.

Gun.io: What general advice can you give to entrepreneurs looking to scale a business in China?

Virgil: Make sure you have a Chinese engineer as a co-founder. There are no open sources of industry-wide data comparable to Angel List in China quite yet, but 17startup.com does put out an annual report that’s useful for research.

Edit: Previously, this piece incorrectly quoted Virgil as saying that 36kr sourced all of its content from TechCrunch. His comments have been edited.


Algorithmic Creativity


Today, we’ve got a guest spot from Re-Compose, an Austrian company that creates software to analyze and re-synthesize digital music. Liquid Notes is the company’s first product. The company’s chief developer, Stefan Lattner, is here to chat.

What inspires you about your work?

I’m interested in both Artificial Intelligence and music, and that led to me bringing these fields together to try to help computers better understand musical input.

What does this mean in the context of your product, Liquid Notes?

So, today, many people want to create their own music, and there are thousands of plug-ins for doing so. Most of these plug-ins, however, deal with sound synthesis or manipulating signals. Very few allow you to create actual notes, upon which tracks are built. So, users with little composition experience are left completely alone when it comes to designing a song from scratch. They can draw on rather static loops, but this doesn't allow for much personalization. We tried to offer a way for musicians to manipulate their pieces at a level between single notes and entire unchanging loops.

Can you explain how your program works in slightly more detail?

Sure, so a musical piece, opened and manipulated in Liquid Notes, passes through three consecutive steps – all of which can be considered within the scope of Artificial Intelligence. First, the single tracks of the input arrangement are classified into musically relevant classes like Melody, Harmony, Bass, or Drums. This classification is necessary for both the subsequent harmonic analysis and the re-harmonization, and it works from properties like polyphonic density, average note length, or variance of pitch.
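As a rough illustration of such feature-based classification – a self-contained sketch, not Re-Compose's actual code; the thresholds and rules below are invented for the example:

```javascript
// Notes are {pitch: MIDI number, start, duration} objects.
function averageNoteLength(notes) {
  return notes.reduce((sum, n) => sum + n.duration, 0) / notes.length;
}

function pitchVariance(notes) {
  const mean = notes.reduce((s, n) => s + n.pitch, 0) / notes.length;
  return notes.reduce((s, n) => s + (n.pitch - mean) ** 2, 0) / notes.length;
}

// A hypothetical rule of thumb: drum tracks tend to have short notes and
// near-zero pitch variance, while melodies have high pitch variance.
function guessClass(notes) {
  if (pitchVariance(notes) < 1 && averageNoteLength(notes) < 0.3) return "Drums";
  if (pitchVariance(notes) > 10) return "Melody";
  return "Harmony";
}
```

A real classifier would combine many more features (polyphonic density among them) and learned weights rather than hand-picked thresholds.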

The next step is the harmonic analysis, which was developed by one of our guys currently living in Hollywood. A detailed description of his work would probably be too involved, but suffice it to say it's a combination of looking up which notes are in the piece, weighing them for harmonic relevance, dividing the whole song into regions with valid chords, and throwing out ambiguous choices by comparing them with detected scales and probability tables (i.e. what is the probability of a certain chord following another).

The last step, re-harmonization, is a combinatorial problem with a large search space and sometimes more than one optimal solution. Heuristic algorithms are well suited to such problems, and they are quite convenient because all you need to define is a fitness function for assessing different solution candidates. The optimal solution doesn't have to be known; it is sufficient to know whether one candidate is better or worse than another.
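A minimal hill-climbing sketch of this idea – an invented illustration, not Re-Compose's algorithm: all the search needs is a fitness function that ranks candidate chord sequences.

```javascript
// Hypothetical fitness: reward small root movement between adjacent chords
// (roots given as pitch classes). Higher is better.
function fitness(chords) {
  let score = 0;
  for (let i = 1; i < chords.length; i++) {
    score -= Math.abs(chords[i] - chords[i - 1]);
  }
  return score;
}

// Deterministic hill climbing: repeatedly try swapping one position for each
// candidate chord, keeping any change that strictly improves fitness.
function hillClimb(initial, candidatesAt, steps) {
  let best = initial.slice();
  for (let s = 0; s < steps; s++) {
    const i = s % best.length;
    for (const c of candidatesAt(i)) {
      const trial = best.slice();
      trial[i] = c;
      if (fitness(trial) > fitness(best)) best = trial; // keep only improvements
    }
  }
  return best;
}
```

The search never needs to know the globally optimal harmonization, only whether one candidate beats another – exactly the property described above.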

Is this the future of music then? Do you see it as being algorithm-driven?

With an ever increasing pressure on producers to deliver music faster and at much lower cost, algorithm-driven music production will play a very important role among composers for the media (TV, film, commercials, music libraries, computer games, etc.) as well as creators of electronic club music.

However, the big revolution in music is currently happening at the much lower end of the scale, with the iPad and other tablets and devices enabling a large portion of the population to make inroads into composing and music production. Algorithms will allow these people to get acquainted with the basics of music and then, step by step, to advance. They might even utilize that kind of technology to reach a professional level, although it is too early to predict whether that can be accomplished through technology alone.

So in parallel to catering to composers and producers of "traditional" digital music, we see our mission in kicking off an entirely new paradigm in music creation through our future technologies. We don't intend to take away the magic from well established and time-proven methods of music making but to extend the spectrum of creative possibilities beyond current limits.

ReCo's Vision

At Re-Compose, we picture ourselves as a technology supplier for developers of end-customer applications. So our algorithms could be delivered in the form of an SDK, some kind of "black box" technology, or in parts even as open source code to be integrated into software and hardware in need of music analysis and resynthesis. The span of conceivable applications would be limitless.

RTC in your app with AddLive: Part 1


Ted, the "tech guy" from AddLive was born to be a media streaming hacker, went through some hard times with Java, C/C++ and now seems to be fallen in love with JavaScript and Python.

AddLive is a simple, developer friendly way to integrate live video, voice and text chat into applications by supplementing the emerging open WebRTC standard. You can read more about us here.

What's this all about?

In a nutshell, the AddLive SDKs allow pretty much anyone to add high quality, low latency video conferencing to their applications - whether the app is a web app, mobile (Android/iOS), or even a plain old desktop application. We started working on our platform ages before the term WebRTC was even coined, but we quickly saw the potential of the technology (at IETF 80 in Prague) and decided to jump on this boat. We now offer technology powered by native WebRTC in several flavors: a JavaScript library built on top of a browser plug-in, an Objective-C framework (PhoneGap plug-in on its way!), a Java JAR with native libs for Android, and a native C library for desktop applications.

Cool, now that the marketing part is complete, let's introduce some juicy details on how this stuff actually works. The biggest thing about AddLive is that we're focusing on multiparty video conferences. With AddLive, John doesn't call Bob; instead, both Bob and John connect to the same scope on our streaming server. Those of you already familiar with FMS/Red5 will finally feel back at home :). This approach has one pretty good side effect for developers - when creating applications powered by AddLive, you don't need to worry about any signaling or session establishment.

It also makes us pretty good at multiparty quality. One of the platform's key features is quality adaptation. By measuring CPU and network utilization, we make sure that your users are having the best experience possible with the available resources. Another handy feature of the SDK is that it also covers network traversal. We use multiple media streaming protocols, from those offering the best quality (SRTP over UDP - peer-to-peer if there are 2 users in the scope, relayed if more) to best-effort ones (relayed HTTPS streaming, including passing through proxies).
Finally, the AddLive SDK allows you to run screen sharing sessions and exchange messages using the low latency, reliable sendMessage API.

See it in action

To get our feet wet with the SDK, let's implement a super simple service for testing API credentials. The app asks the user for an application id and API key, then it:

  • initializes the platform
  • starts local video preview
  • connects to streamer with fixed scope and both media published
  • displays the remote user's video feed when someone connects to the same scope

A fully working example can be found here (JSFiddle).

Document structure

First of all, we need a document to host the application. We'll start by creating a dead simple HTML page. The head snippet is attached below:
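The snippet did not survive in this archive; a hedged reconstruction matching the description below follows (the jQuery/Bootstrap versions and the SDK URL are placeholders):

```html
<head>
  <meta charset="utf-8">
  <title>AddLive API credentials tester</title>
  <!-- External resources: jQuery, the AddLive SDK, and Bootstrap (SDK URL is a placeholder) -->
  <script src="//ajax.googleapis.com/ajax/libs/jquery/1.8.3/jquery.min.js"></script>
  <script src="https://example.com/addlive-sdk.js"></script>
  <link rel="stylesheet" href="bootstrap/css/bootstrap.min.css">
  <!-- App-specific: layout, functionality, and a SHA-256 helper for the auth signature -->
  <link rel="stylesheet" href="styles.css">
  <script src="sha256.js"></script>
  <script src="scripts.js"></script>
</head>
```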

We use the head only to include the external resources - jQuery and the AddLive SDK of course, then Bootstrap, as we do things in style. Finally, there is some stuff specific to this particular app - layout (styles.css), functionality (scripts.js), and a SHA-256 helper lib (required to calculate the auth signature).

To meet its functional requirements, our little application needs 2 UI widgets - one for rendering users' video feeds (local preview + remote feed) and a control section.

From AddLive's point of view, to render video you just need to prepare any block node (<div>, <tr>, <section>, <article>, etc.) with an explicitly defined id. In our case, we're using 2 render containers - #renderLocalPreview and #renderRemoteUser.

App functionality

The fun starts when the user enters the API credentials and hits the start button. The first thing we need to do is platform initialization:
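The snippet was lost in this archive; a hedged sketch assembled from the description below follows (the listener method and state names are assumptions - consult the AddLive docs):

```javascript
function initAddLive(appId) {
  // 1. Implement the asynchronous PlatformInitListener.
  var listener = new ADL.PlatformInitListener();
  listener.onInitStateChanged = function (e) {
    if (e.state === ADL.InitState.INSTALLATION_REQUIRED) {
      // Show the plug-in installation button, only if required.
      $('#installBtn').attr('href', e.installerURL).show();
    } else if (e.state === ADL.InitState.INITIALIZED) {
      // Platform ready - proceed with the app setup.
      startLocalPreview();
    }
  };
  // 2. Request initialization, passing the listener and init options (app id).
  ADL.initPlatform(listener, { applicationId: appId });
}
```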

The biggest part of initialization is implementing the asynchronous PlatformInitListener interface. Here we just define a handler for the state changed event, in which we show the plug-in installation button (only if required), and then proceed with app setup once the platform is ready. Once we have the listener, we can request the SDK to initialize using ADL.initPlatform with the previously defined listener and some init options, where we specify the app id.

Once this step is ready, we can start local video preview and set global events listener.

To understand the next part of application functionality, let's take a step back and have a look in some broader aspects of working with the Platform.

The AddLive SDK provides its services through the AddLiveService interface. You can easily obtain it using the ADL.getService method. Each method is asynchronous and returns results using the ADL.Responder class. To simplify this process, we have prepared a simple factory method, ADL.r, which takes 2 optional functions - a success and an error handler. That was tough! But once you get it, you can use AddLive on any platform!

To deal with the video feeds, we need to introduce 2 concepts: a video sink and a video renderer. In AddLive, a video sink represents a source of raw video frames - e.g. the local preview feed or a feed constructed by decoding a remote video stream. Each video sink has a unique id, which can then be used to render it using the ADL.renderSink method. This method, given a video sink id and a container id, creates a rendering widget that fills the container completely (read: {width:100%;height:100%}). Also, please note that each video sink may have multiple renderers.

Phew! Finally, we can go back to local video preview. Implementation below:
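The snippet was lost in this archive; a sketch reassembled from the names the text itself uses (startLocalVideo, ADL.r, ADL.renderSink) follows:

```javascript
function startLocalPreview() {
  // Success handler (everything is async!) - receives the sink id of the
  // local preview feed and renders it into the #renderLocalPreview div.
  var onSucceeded = function (sinkId) {
    ADL.renderSink({
      sinkId: sinkId,
      containerId: 'renderLocalPreview',
      mirror: true // people are used to seeing themselves in a mirror
    });
  };
  // Start the local capture; with no devices configured, the default webcam is used.
  ADL.getService().startLocalVideo(ADL.r(onSucceeded));
}
```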

Starting local video is a two-step process: we need to start the video capture and then render it. First we create the success handler (everything is async!). The function we define there simply takes the sink id as its only input parameter, and then requests the platform to render it in the #renderLocalPreview div. You can also see that we're setting the mirror attribute to true - it improves the user experience, as people are used to seeing themselves in a mirror.

Once we have the success handler defined, we can request the SDK to start the local capture, using the AddLiveService.startLocalVideo method. Please note that to keep this tutorial concise, we won't cover capture device configuration here, so AddLive will use the default webcam.

Afterword

Since this little article is just the beginning of a short series, we'll stop here. Next time, we'll continue with this sample application and try to cover the core of the AddLive platform - streaming. If you're eager to learn more about the platform without waiting, or want to know more about our services or anything related to video streaming on the web, feel free to contact us via our community portal, or just ping me directly on Twitter: @stdtdk.

RTC in your app with AddLive: Part 2


Ted, the "tech guy" from AddLive was born to be a media streaming hacker, went through some hard times with Java, C/C++ and now seems to be fallen in love with JavaScript and Python.

AddLive is a simple, developer friendly way to integrate live video, voice and text chat into applications by supplementing the emerging open WebRTC standard. You can read more about us here.

Part 1, TL;DR version

  • AddLive allows anyone to add voice and video conferences to their web, native mobile or desktop apps.
  • The framework is connectivity-driven - people don't call each other, but instead join the same virtual room (a scope, in Red5/FMS terms)
  • We've started creating an application allowing one to test API credentials by creating a 1 to 1 chat room.
  • You can access the AddLive features using the AddLiveService interface you can get by calling ADL.getService()
  • All methods in AddLive are asynchronous and take an instance of the ADL.Responder object, which receives the result or error.
  • To deal with video rendering we have 2 concepts - video sink (source of frames, e.g. local video or remote video stream) and video renderer (UI widget that renders associated sink).
  • You can read the full story here and more about us here.

About the Part 2

In the previous post, we learned some basic stuff about the AddLive platform. Using a sample application that connects anyone to the same scope, we described the document structure and showed how to use the platform in "1 player mode".
With this part, we're moving forward and will learn how to play multiplayer :). We'll finish talking about the sample application by covering the 2 most important aspects of the streaming functionality - handling global AddLive Service events and establishing a new connection.

Since we're describing the same application as in the previous part, you can find the complete source code in the same location, hosted on JSFiddle, here.

AddLive global events

The AddLive SDK dispatches several events related to hardware devices, connections, or video sinks. Using those events, an application can be notified about things like speech activity, new devices being plugged in or out (yes, we do support "hot plug"), the dimensions of a particular video sink, plus a whole bunch of events related to a particular connection - a new user joined or left the scope; a media stream was published or unpublished; a user sent a message; etc. To receive events from AddLive, the client application should create an instance of the AddLiveServiceListener interface and override the methods of interest. Once prepared, the service listener can be registered with the platform using the AddLiveService.addServiceListener method.

The following snippet contains a function that prepares and registers the service listener for our sample application:

function initServiceListener() {
  log.debug('Initializing the AddLive Service Listener');

  // 1. Instantiate the listener
  var listener = new ADL.AddLiveServiceListener();

  // 2. Define the handler for the user event
  /**
   * Handles events dispatched when a new remote participant joins the scope
   * or an existing one leaves the scope.
   *
   * @param {ADL.UserStateChangedEvent} e
   */
  listener.onUserEvent = function (e) {
    log.debug('Got new user event: ' + e.userId);
    if (e.isConnected) {
      ADL.renderSink({
        sinkId: e.videoSinkId,
        containerId: 'renderRemoteUser'
      });
      $('#remoteUserIdLbl').html(e.userId);
    } else {
      $('#renderRemoteUser').empty();
      $('#remoteUserIdLbl').html('undefined');
    }
  };

  // 3. Register the listener using the created instance and the prepared
  //    result handler.
  ADL.getService().addServiceListener(ADL.r(connect), listener);
}

In our super simple application, we care about only a single event: the user event. The onUserEvent method, like all other event handlers, receives just a single parameter: an instance of UserStateChangedEvent. The UserStateChangedEvent object completely describes any change of a remote user's state. It tells us whether the user joined or left the scope (isConnected true or false), which media streams the user publishes (the {audio,video,screen}Published boolean flags), and how to render the video feeds if present (the screenSinkId and videoSinkId properties).

When the handler we've created receives a new user event, it simply checks whether the event is related to a user joining or leaving the scope. For a new user, it tries to render the video feed using the ADL.renderSink method. If the event is related to a user leaving the scope, the handler simply clears the rendering widget.

Once we have prepared our listener, we can register it using the addServiceListener method. Upon successful registration, we can finally try to establish a connection (described below). It is important (I mean, really important) to note that the application should _never_ try to establish a connection without a listener registered first. Since pretty much everything inside the AddLive SDK is asynchronous, some user events could simply be lost if the event listener were registered after establishing the connection.

Connectivity

It took us a while, but we're now finally ready to establish a connection to the streaming service! Below is the snippet covering it:

function connect() {
  log.debug('Establishing a connection to the AddLive Streaming Server');

  // 1. Disable the connect button to avoid a cascade of connect requests
  $('#connectBtn').unbind('click').addClass('disabled');

  // 2. Prepare the connection descriptor with the scope id and
  //    the authentication details.
  var connDescriptor = {};
  connDescriptor.scopeId = TEST_SCOPE_ID;
  connDescriptor.authDetails = genAuth(
      TEST_SCOPE_ID, genRandomUserId(),
      $('#appIdInput').val(), $('#apiKeyInput').val());

  // 3. Define the result handler
  var onSucc = function () {
    log.debug('Connected. Disabling connect button and enabling the disconnect');
    $('#localUserIdLbl').html(connDescriptor.authDetails.userId);
  };

  // 4. Define the error handler
  var onErr = function (errCode, errMessage) {
    log.error('Failed to establish the connection due to: ' + errMessage +
        ' (err code: ' + errCode + ')');
  };

  // 5. Request the SDK to establish the connection
  ADL.getService().connect(ADL.r(onSucc, onErr), connDescriptor);
}

At a high level, the connect function: disables the connect button to avoid multiple connect requests, creates a connection descriptor and the result handlers, and finally calls AddLiveService.connect. It all sounds trivial, but let's focus on the most important part: the connection descriptor. It is the only parameter passed to the connect method, and it completely describes how to connect. It defines the id of the scope the client is trying to connect to, as well as the complete authentication signature (more on that below). These are the only two mandatory attributes of the connection descriptor; the rest is taken from so-called "sane defaults" and configures how to publish the media streams. You can read more about the optional parameters here.

The last thing to be covered in this (already too long) post is connection authentication. Authentication serves two purposes: to make sure that no one uses your account without your knowledge, and to ensure that only users you allow can join a particular scope (to avoid, e.g., eavesdropping). During the sign-up process, you'll receive your own application id and an API key. Using these, you are required to authenticate every connect attempt your users make. To create the auth signature, you should:

  • Create a string by concatenating (in order): the application id, the scope id, the user id, a random salt string, the signature expiry timestamp (UTC), and finally the API key
  • Calculate the SHA-256 checksum of that string
  • Create the authDetails object, containing: the userId, the expiry timestamp, the salt, and finally the signature
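The steps above can be sketched in a few lines of server-side code (it must run on your server, never in the browser, so the API key is never shipped to clients). Here is a minimal sketch in Python; the field names and salt format are my assumptions, so check the authentication docs for the authoritative layout:

```python
import hashlib
import random
import string
import time

def gen_auth(app_id, scope_id, user_id, api_key, lifetime_s=300):
    """Sketch of the auth-signature steps described above (assumed field names)."""
    salt = ''.join(random.choice(string.ascii_letters) for _ in range(20))
    expires = int(time.time()) + lifetime_s
    # 1. Concatenate (in order): application id, scope id, user id,
    #    salt, expiry timestamp, API key.
    payload = f'{app_id}{scope_id}{user_id}{salt}{expires}{api_key}'
    # 2. SHA-256 checksum of the concatenated string.
    signature = hashlib.sha256(payload.encode('utf-8')).hexdigest()
    # 3. The authDetails object passed along with the connection descriptor.
    return {'userId': user_id, 'salt': salt,
            'expires': expires, 'signature': signature}
```

Whatever language you use, the order of concatenation must match the scheme exactly, or the server will reject the signature.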

You can read more about the authentication scheme here.

Afterword

I guess that would be it. After reading these two posts, you should be able to get your feet wet with the AddLive SDK and start working on your own gear. If you're interested in learning more about the platform, feel free to use our documentation: http://www.addlive.com/platform-overview/. You can also find more interesting tutorials on our GitHub. Additionally, you can always contact us via our community portal, or just ping me directly on Twitter: @stdtdk.

A conversation with Mike McGee of The Starter League


Their program features in-class learning, mentorship, events, and workshops that help take batches of beginners to an intermediate programming proficiency over the course of 3 months. They’re backed by 37signals, and the founders themselves are graduates of Northwestern University. Recently, they closed a deal to teach local teachers how to code and how to teach code, partnering with the mayor of Chicago. Doing so, they’ve effectively created both a new industry and educational model, all the while educating a new generation of programmers. So, if you haven’t checked out The Starter League yet, you might want to do so!

I thought what The Starter League was doing was pretty cool, so I emailed Mike McGee, one half of the founding team, last week to tell him that. He replied graciously and we got on the phone together.

Today, on Dispatches from the Prairie, we've got a few points from that call:

Everyone wants to code. After paying their teacher, leasing a space, and purchasing computers and other supplies, Michael and his co-founder Neal expected to have exactly $8.00 in the bank. They ended up with roughly 5 orders of magnitude more than that after launching their first class. Six quarters since their 2011 launch, they’re crushing their own projections.

Networking isn’t beneath you. In the technology world, this word has a slimy connotation, conjuring up images of dudes in oversized suits forcing their business cards onto you and each other. For Mike and Neal, however, their connecting prowess served them well. When they launched, they launched from the top due to the strength of their network – having shared an office with Groupon, forged a partnership with 37signals, and courted the support of Rahm Emanuel, Chicago’s notoriously no-bullshit mayor.

Founding labels are moving targets. Neither Neal nor Michael had a programming background. In fact, the duo was searching for an intensive series of introductory seminars on programming, and when they couldn’t find any that actually existed, they taught themselves – creating in the process probably the best possible use case for their curriculum. Each day on Quora and other communities, there’s a new question about “finding a technical co-founder” -- the stock answer might just become “go take classes with The Starter League and become one yourself.”

Write a script for Pilot SSH in Python


Geoffroy, a freelance security geek, enjoys useful tools and useless hacks. When he is not busy looking for new ways to protect applications, he launches fun new projects.

Usable SSH on phones with Pilot SSH

Pilot SSH is an iPhone app for server administration, but it's not yet another shell app. Instead, it generates a simple user interface to launch server-side scripts and display their results.

I tried multiple SSH clients for smartphones (Prompt and iSSH for iPhone, ConnectBot for Android, and even PocketPutty for Windows Mobile), and it felt clunky: maybe the shell was not the right way to interact with a phone. So I started working on a better way to manage my servers.

Most administration tasks can be abstracted away in a script, so why not take advantage of that and create a nice UI for them? That way, the keyboard is no longer the primary means of interacting with a server.

Thanks to this idea, flushing a cache, upgrading a WordPress website or restarting a web server can be done in a few taps on your screen! This is very useful for quick fixes, when you're on call in a bar, or when you want to hide from your significant other the fact that you're still working on vacation.

The application is currently available for iPhone, and there will be an Android version very soon.

How to Use It

Pilot SSH launches scripts stored on the server side and, by parsing their JSON output, generates the user interface. The scripts are open source, and can be downloaded and shared on Pilot SSH's GitHub repository. Any language can be used to develop them, as long as they conform to the script API.

Let's make a simple script to illustrate this. How about a Python script displaying the network interfaces?

First, we will create the ~/.pilotssh/index file:

#!/bin/bash

echo '{ "version": 1,
          "title": "Commands",
           "type":"commands",
        "values" : [ { "name" : "Network interfaces",
                      "value" : "",
                    "command" : ".pilotssh/network/network.py"
                     }
                   ]
       }'

This file is the first script Pilot SSH calls after connecting. It creates one line in the table; if you touch that line, the script .pilotssh/network/network.py is launched (note that you can use an absolute path if you want). Note that the title attribute sets the window's title, and that every hash in the values array creates a line in the table view:

index

Now, let's create network.py. You must put a shebang at the beginning (#!/usr/bin/python) and make the file executable. The script will need the netifaces package, so install it through easy_install.

#!/usr/bin/python
import netifaces, sys

def command_from_interface(intf):
    return '{ "name" : "' + intf + '", "value" : "'+ netifaces.ifaddresses(intf)[netifaces.AF_INET][0]["addr"] + '", "command" : ".pilotssh/network/network.py ' + intf + '" }'

def index():
    result = '{ "version": 1, "title": "Network Interfaces", "type":"commands", "values" : [ '
    interfaces = netifaces.interfaces()
    length = len(interfaces)

    if(length >= 1):
        result += command_from_interface(interfaces[0])

    if(length > 1):
        for i in xrange(1, length):
            result += ', ' + command_from_interface(interfaces[i])

    result += ' ] }'

    print result

def command_from_key_value(key, value):
    return '{ "name" : "' + key + '", "value" : "'+ value + '", "command" : "" }'

def interface_info(intf):
    result = '{ "version": 1, "title": "' + intf + '", "type":"commands", "values" : [ '

    af_inet = netifaces.ifaddresses(intf)[netifaces.AF_INET][0]
    address = af_inet["addr"]
    result += command_from_key_value("IP", address)

    if "broadcast" in af_inet:
        broadcast = af_inet["broadcast"]
        result += ', ' + command_from_key_value("Broadcast", broadcast)

    netmask = af_inet["netmask"]
    result += ', ' + command_from_key_value("Netmask", netmask)

    mac = netifaces.ifaddresses(intf)[netifaces.AF_LINK][0]["addr"]
    result += ', ' + command_from_key_value("MAC", mac)

    result += ' ] }'
    return result


if(len(sys.argv) == 1):
    index()
else:
    print interface_info(sys.argv[1])

This script will first generate an index of all the interfaces, and provide a command to get more information on each one of them. I could have used a library to generate the JSON, but this example is simple enough, so I wrote the output directly.

There is a common pattern in these scripts: first generate the header (title, etc.), then gather the information, then loop over the information to fill the values array.
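As a side note, the hand-built strings above could be replaced with json.dumps, which also makes the header/gather/loop pattern easy to see. The function and interface list below are illustrative, not part of Pilot SSH:

```python
import json

def build_index(interfaces):
    """Illustrative rewrite of the index generation using json.dumps.

    `interfaces` is a list of (name, address) pairs, e.g. gathered
    from netifaces as in the script above.
    """
    # 1. Header: version, title and view type.
    doc = {'version': 1, 'title': 'Network Interfaces', 'type': 'commands'}
    # 2./3. Loop over the gathered information to fill the values array.
    doc['values'] = [
        {'name': name, 'value': addr,
         'command': '.pilotssh/network/network.py ' + name}
        for name, addr in interfaces
    ]
    return json.dumps(doc)
```

Using a JSON library also spares you from escaping bugs if an interface name or value ever contains a quote.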

Network interfaces

Here is the JSON generated from the index for my machine:

{ "version": 1,
    "title": "Network Interfaces",
     "type":"commands",
  "values" : [ { "name" : "lo",
                "value" : "127.0.0.1",
              "command" : ".pilotssh/network/network.py lo" },
               { "name" : "eth0",
                "value" : "10.0.2.15",
              "command" : ".pilotssh/network/network.py eth0" },
               { "name" : "eth1",
                "value" : "192.168.56.101",
              "command" : ".pilotssh/network/network.py eth1" }
             ]
}

You can see that I added a value; it is displayed in yellow in the table. The command key indicates the command sent when you touch one of the lines.

Let's see the result for .pilotssh/network/network.py eth0:

eth0 interface

And here is the corresponding JSON output:

{ "version": 1,
    "title": "eth0",
     "type":"commands",
  "values" : [ { "name" : "IP",
                "value" : "10.0.2.15",
              "command" : "" }, 
               { "name" : "Broadcast",
                "value" : "10.0.2.255",
              "command" : "" },
               { "name" : "Netmask",
                "value" : "255.255.255.0",
              "command" : "" }, 
               { "name" : "MAC",
                "value" : "08:00:27:e9:ae:4e",
              "command" : "" } 
             ]
}

There is no command here, so touching the corresponding lines will not send any command. But we could quite easily use the query attribute to change a value, add a button to bring the interface up or down, renew the DHCP lease, etc.

As you can see, writing a script for Pilot SSH is very easy (seriously, it took me more time to write this article than to write the script). More scripts can be downloaded on the Github repository, and you can use the issues to discuss new script ideas and future features of the application. By the way, I already pushed this Python script on the repository, so feel free to fork it and contribute!

There is a lot more coming for this app in future versions, and it will get more and more useful with all the scripts people will develop and share.
