There are a lot of great talks by Sam Aaron, the creator of Sonic Pi, on YouTube, but I like this one quite a bit.
So while I was sitting there with my glass of Port working through the Sonic Pi tutorial, I had the urge to regain the text-editing fluency I'm used to from my day job. Luckily Sonic Pi supports a lot of keyboard shortcuts; however, as of now they're optimised for an English keyboard layout.
After a bit of tinkering, and with the help of Karabiner-Elements, I managed to re-map my line-commenting shortcut of CMD+Shift+7 (I'm working on a MacBook with a Swiss German keyboard) to the shortcut defined by Sonic Pi, M-/.
The configuration you’ll want is:
{
    "title": "Remap CH comment combo to EN comment combo",
    "rules": [
        {
            "description": "Remap CH comment combo to EN comment combo",
            "manipulators": [
                {
                    "conditions": [
                        {
                            "bundle_identifiers": [
                                "net\\.sonic\\-pi\\.app"
                            ],
                            "type": "frontmost_application_if"
                        }
                    ],
                    "from": {
                        "key_code": "7",
                        "modifiers": {
                            "mandatory": [
                                "left_command",
                                "left_shift"
                            ]
                        }
                    },
                    "to": {
                        "key_code": "keypad_slash",
                        "modifiers": [
                            "left_command"
                        ]
                    },
                    "type": "basic"
                }
            ]
        }
    ]
}
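Before Karabiner picks the file up, it can be worth sanity-checking the JSON. A quick sketch like the following (assuming Python 3 is available; the function name is my own, not part of Karabiner) catches syntax slips and the most common structural mistakes:

```python
import json

def validate_config(text):
    """Parse a Karabiner complex-modification file and return its rule descriptions.

    Raises ValueError on malformed JSON and AssertionError on missing keys.
    """
    config = json.loads(text)
    assert "title" in config and "rules" in config, "missing required top-level keys"
    for rule in config["rules"]:
        for manipulator in rule.get("manipulators", []):
            # every manipulator in a complex modification should be of type "basic"
            assert manipulator.get("type") == "basic", "unexpected manipulator type"
    return [rule["description"] for rule in config["rules"]]
```

You could point it at the file created in the steps below, e.g. `validate_config(open(path).read())`.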
If you also have this need, the following steps should set you up: create a file named remap_ch_comment_combo_to_en_comment_combo.json in ~/.config/karabiner/assets/complex_modifications and fill it with the configuration above.
First we have to change the currently used buildpack to the multi buildpack, which makes it possible to run the Node.js buildpack alongside the Elixir one.
heroku config:add BUILDPACK_URL=https://github.com/ddollar/heroku-buildpack-multi.git
Then add the file .buildpacks with the contents below, which will pull in the buildpacks and run their compile scripts. The buildpack listed last will be used to run the application.
# .buildpacks
https://github.com/heroku/heroku-buildpack-nodejs.git#34cffc9b6397bc1ce97a4b5e911fa771fc4e7907
https://github.com/HashNuke/heroku-buildpack-elixir.git#36f2ff22d0236589256d9044091b950b7cc565d2
Now that we have multiple buildpacks, we need to tell the Node.js one to run the postinstall hook after all the dependencies are installed. Just add the scripts part to your package.json and you're all set for Heroku to run the brunch build command.
# package.json
{
    "dependencies": {
        ...
    },
    "engines": {
        "node": "~ 0.12.1"
    },
    "scripts": {
        "postinstall": "node_modules/.bin/brunch build"
    }
}
When deploying the application to Heroku, the configuration variables will be exposed to the application via environment variables. For the database that means there will be a single string from which the username, password, host, etc. will have to be extracted. You could theoretically do that by hand, defining those variables individually; however, what happens if your database provider has an issue and suddenly decides to change the value behind your environment variable? To avoid that, just put the code below into your prod.secret.exs file to help you split the configuration variable for the database.
# config/prod.secret.exs
defmodule Heroku do
  def database_config(uri) do
    parsed_uri = URI.parse(uri)

    [username, password] =
      parsed_uri.userinfo
      |> String.split(":")

    [_, database] =
      parsed_uri.path
      |> String.split("/")

    [{:username, username},
     {:password, password},
     {:hostname, parsed_uri.host},
     {:database, database},
     {:port, parsed_uri.port},
     {:adapter, Ecto.Adapters.Postgres}]
  end
end
Then, instead of defining the arguments for your app separately, call the database_config function and you're all set.
# config/prod.secret.exs
config :yourapplication, Yourapplication.Repo,
  "DATABASE_URL"
  |> System.get_env
  |> Heroku.database_config
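For reference, the same URL-splitting idea can be sketched outside Elixir. Here is a Python equivalent of the helper above (the dict keys mirror the Elixir keyword list; this is an illustration, not part of the post's deployment code):

```python
from urllib.parse import urlparse

def database_config(uri):
    """Split a Heroku-style DATABASE_URL into its connection parts,
    mirroring what the Elixir Heroku.database_config/1 helper does."""
    parsed = urlparse(uri)
    return {
        "username": parsed.username,
        "password": parsed.password,
        "hostname": parsed.hostname,
        # the path is "/<dbname>", so strip the leading slash
        "database": parsed.path.lstrip("/"),
        "port": parsed.port,
    }
```

Feeding it a URL like `postgres://user:secret@host:5432/mydb` yields the individual components the Repo configuration needs.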
Once deployed, run your migrations with:
heroku run mix ecto.migrate
Since I’m pretty new to Elixir and even newer to the Phoenix Framework, this tutorial might lack some best practices, in which case please let me know so this post can be updated to reflect them.
heroku create myapp
Use myapp as the Name in the HerokuBeta settings view and leave the Github api url empty. Then, for the Heroku token, you'll need your Heroku API token, which you get via heroku auth:token (e.g. token123), and the email address you use to log in to Heroku (e.g. hi@email.com). Convert the two into a Base64 string for the Authorization header by issuing this command on a Unix system: echo -n "hi@email.com:token123" | base64 (e.g. base64-123; the -n avoids encoding a trailing newline).
curl -X POST https://api.heroku.com/oauth/authorizations \
-H "Accept: application/vnd.heroku+json; version=3" \
-H "Authorization: Basic base64-123" \
-H "Content-Type: application/json" \
-d "{\"description\":\"direct token description (preferably meaningful)\"}"
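The Authorization header from that curl call can also be assembled programmatically. A minimal Python sketch, using the same example credentials from above (the function name is my own):

```python
import base64

def basic_auth_header(email, token):
    """Build the HTTP Basic Authorization value expected by Heroku's API.

    Unlike a plain `echo | base64`, no trailing newline is encoded.
    """
    credentials = f"{email}:{token}".encode("ascii")
    return "Basic " + base64.b64encode(credentials).decode("ascii")

# e.g. basic_auth_header("hi@email.com", "token123")
```

The returned string goes straight into the `Authorization:` header of the POST request shown above.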
Personal NASes are quite handy; however, their widespread usage and the fact that people don't often check their system via the web dashboard make them a perfect target for crackers trying to extort you for money or just using your machine to mine bitcoins for them.
In this case I had a DS213j delivered to me with SynoLocker on it, a malicious piece of code that encrypts all your files and holds them hostage until you give in and pay what they ask for. Please don't ever give in. Just accept that your data is lost forever; hopefully you have a backup of it somewhere else, and if not, now would be a good time to start thinking about one.
So on that basis, the fix is fairly trivial.
On a further note, since crackers were able to get into your NAS once, I'd ask yourself whether you really need external access to it, and otherwise make sure there are no ports being forwarded by your router. I also recommend changing your router password, especially in case it's still the factory default. If you really do need remote access, at least change the ports which are used externally, e.g. map external port 3001 to 5000 internally.
Lastly, I've used automatic DNS updating services quite a bit too; however, they could have been the enabling party for the attack. Once such a provider is compromised, crackers can run their attacks against all your ports, which makes the previous advice ineffective. Since routers nowadays don't change their IP addresses that often, I usually look up my home address via the Gmail login history and use the naked IP. Less convenient, but more secure.
Hope this short summary helped during your reset, and that it's the last time something like that happens.
Last Sunday I enjoyed watching all four Bourne movies, including the newest one. Additionally, I like the TV series Homeland and Person of Interest. However, what is shocking to me is that while those stories are just made up by writers, situations like those on TV actually happen in real life to people like you and me (well, sort of). Algorithms decide whom to kill, and drone pilots carry out the strikes like robots. Without formal charges. Without asking questions.
If you have two hours to spare, I encourage you to watch the full-length recording of the #29c3 session entitled Enemies of the State, in which the three whistleblowers Jesselyn Radack (former ethics advisor to the Department of Justice), Thomas Drake (former senior executive of the NSA) and William Binney (former senior technical leader of the NSA) talk about what the government did to them while they played by the rules. Quite an eye opener.
During the last couple of weeks, many have announced that they are moving away from Google Feedburner. The reason for this move: the Feedburner API will be shut down in October of this year, probably leading to a similar fate for the feed-proxy parts serving this feed.
That means, if you are reading this and would like to keep doing so in a couple of months, I encourage you to update the URL in your feed reader to http://philippkueng.ch/atom.xml.
Thanks guys, for being such great readers.
An article in the Sonntagszeitung about the Trainshare project triggered a considerable Twitter "shitstorm" against the SBB on Sunday morning. As a participant in the project, I would like to describe how it got this far in the first place.
The main reason for the situation is most likely a misunderstanding on March 30, 2012: on that date an Open Data Hackday took place in Zürich, an event at which programmers, designers and others worked with open data on great new applications for the public.
At this event, following an earlier blog post, Alain, Adrian and I brought the Trainshare app to life. After we had worked on it for half a day, Bruno Spicher and Jean-Philippe Picard of the SBB sat down with us to discuss what the Trainshare app was about. I explained to them quite openly what the plan was and what our motivations were. Afterwards, Bruno Spicher let us know that they were building "something similar", to be released in autumn 2012. This new SBB app would work just like the existing Gleis7 app but additionally allow "check-ins". Deep "social media" integration with Twitter and Facebook was neither mentioned in the conversation, nor does anything of the sort exist in the Gleis7 app.
Based on the above information, we decided to continue developing the Trainshare app and to offer it to commuters as a free, ad-free "social travel" experiment. We wanted to process and visualise the resulting data so that the SBB would indirectly benefit as well: besides happier customers, perhaps even more rides.
Today there is disagreement about the exact content of those conversations, and the matter can hardly be reconstructed. What I can do, however, is list the events from the hackathon in March up to the Sonntagszeitung article last Sunday in chronological order, in the hope that something might be learned from them.
March 30, 2012 - Hackathon
After our conversation, Bruno Spicher wrote favourably on Twitter about @trainshare, mentioning Michael Rüetschli, the project lead of @SBBConnect:
Interesting #trainsharingApp idea by @philippkueng philippkueng.ch/trainsharingap… #makeopendata /cc @rueetschli
— Spicher Bruno (@brunospicher) March 30, 2012
April 1, 2012
Since we could not finish our project during the hackathon, we put up an e-mail signup page at trainshare.ch shortly afterwards, which counts 200 interested users to date.
April 11, 2012
Swisscom reported on the @trainshare project on their blog. This in turn brought in 18 additional signups, showing that we had discovered a niche.
May 14, 2012
First signs of @SBBConnect, spotted through a retweet by @rueetschli. Incidentally, the two of us have been following each other on Twitter for about a year:
SBB.Connect coming this autumn #SBB
— SBB.Connect (@SBBConnect) May 14, 2012
May 18, 2012
@SBBConnect is no longer just a concept, it is being built. I did not think much of it at the time, since to our knowledge @SBBConnect would be similar to the Gleis7 app, while the Twitter and Facebook integration of @trainshare would address an entirely different audience:
Just specified the last requirement. The four-month conception phase ends right now. #SBB #SBBConnect
— SBB.Connect (@SBBConnect) May 18, 2012
June 28, 2012
I had the honour of presenting @trainshare at the OpenData Conference 2012. The media coverage was substantial, so even SBB employees previously unfamiliar with OpenData should have caught wind of the app.
A "Hello, we are building exactly the same thing, you can save yourselves the time", a "Hi, we think it is great, let us build it together", or any other signal from the railway never came.
July 19, 2012
Danilo, also an OpenData hackathon participant, pointed out to me that the @trainshare app was getting competition from @SBBConnect. When I then read the Bernerzeitung article and saw that the SBB's app would also offer social-platform integration, I got a scare, since we had been told it was to be like the Gleis7 app:
@philippkueng trainshare is getting competition? :) twitter.com/rueetschli/sta… timing-wise you could beat them though!
— Danilo ([@dbrgn](https://twitter.com/dbrgn)) July 19, 2012
I then contacted team members Adrian and Alain and asked them how they felt about ending our project. This was because, in a competitive situation, we could never have outdone the SBB in terms of marketing, and it makes no sense to maintain a platform for just a handful of users. Besides, from my point of view this could not be about fighting each other, but about delivering value to the travelling community.
July 20, 2012
Since I had not yet heard back from all team members and was undecided myself, I turned to the Opendata.ch board for advice. In the discussion it became increasingly clear that a hard competitive situation could hardly achieve anything, as the app GottaGo had shown a few years earlier.
July 23, 2012
The Trainshare team decided, partly because of the GottaGo case, to shut the project down, but also to pick it up again should the SBB app not do what we hoped for. We also decided to make ourselves available to the SBB and, where possible, to help with our fan feedback so the app becomes what it could be.
July 25, 2012
In the evening I spoke on the phone with the Sonntagszeitung journalist and gave him my view of things. At that point I still assumed that it had not been communicated to us in March that the @SBBConnect app would have "social media" integration.
July 26, 2012
The journalist then also contacted Bruno Spicher of the SBB to get his view. Since everything began with the conversation in March, Barnaby Skinner afterwards read Bruno Spicher's account back to me to find any discrepancies with my statements. According to it, Bruno Spicher had told me that the SBBConnect app would be similar to the Gleis7 app and offer check-in functionality as well as social-media integration.
For my part, I only remember being told it was to become a Gleis7-like app with check-in functionality and coupons. This can hardly be verified any more.
Surely many things could have been done differently. But that helps the project little now, so we can only look forward. I personally hope that SBB management does not see the negative mood following the Sonntagszeitung article as the normal state of working with open data. I would wish for lessons to be drawn from it, towards more openness and more platform thinking, as with the transit authority of San Francisco.
With that in mind, I ask all of you bloggers and tweeters to stop the shitstorm against the SBB. And I ask the SBB to do their best to implement the optimal version of the @trainshare idea, a truly innovative "social travel" app!
I also gladly accept the public invitation of @rueetschli, the project lead of @SBBConnect, and will visit him in Bern:
sonntagszeitung.ch/multimedia/art… Hereby gladly offering @philippkueng to visit us in Bern so he can continue working on @trainshare @BarJack
— Michael Rueetschli (@rueetschli) July 29, 2012
Regarding the lessons to be learned and an interpretation of the events, I recommend the article by Hannes Gassert on the OpenData.ch blog.
Last week I was privileged to talk about @trainshare at the second OpenData conference in Switzerland. I was truly honored, and therefore nervous, to present after great speakers like @hannesgassert and @rufuspollock.
Basically, I wanted to give the attending journalists and politicians an overview of what it means to develop a project based on OpenData, or for that matter OpenGovernmentData, and to make the point that getting data for free does not mean the service has to be free too.
For those who have not been able to attend, or just want to glance through the talk again, @AdrianKuendig (working on the Windows Phone App for @trainshare) was kind enough to record it.
Thanks for the amazing opportunity @BarJack, @hannesgassert, @andreasamsler, @loleg, @ecolix and the rest of the OpenData.ch team.
It has been quite a while since #MakeOpenData took place, the event where trainshare transformed from being purely an idea into something more tangible.
Since @AdrianKuendig, @visualcontext, @koma5 and I were not able to deliver a running prototype by the end of the event, we continued working on the app as well as the API whenever we had some spare time. While it is neither done nor available in the AppStore yet, we just wanted to give you a sneak peek of what is to come.
As you can see from the video above, there are still things that need to be figured out or refined. While we might think of some ourselves, we always welcome your suggestions, so do not hesitate.
Well, that is it for this time around. Thanks to my trainshare team members, the awesome guys behind #MakeOpenData who made this event possible, and the journalists and bloggers who wrote articles like this, this and this. You rock!
Abstract - It's a JavaScript wrapper library written for Node.js which makes the usage of websockets a lot easier. It also takes care of supporting older browsers with multiple fallbacks. By using socket.io, creating "realtime" applications is a matter of minutes.
Abstract - Meteor is an opinionated toolset of common Node.js npm packages and some custom code. Among many things, it enforces the use of Fibers, Socket.io and MongoDB, on the server as well as on the client side.
Here is some explanation of what I said during the talk, for those who were not present. I am well aware that I did not sell Meteor as the next big project that everyone has to pick up. The reason is simply that it is really in an early stage of development. Additionally, I think it is rather dangerous to sell Meteor as a framework for beginners, since its magic bits hide a huge amount of logic that a beginner normally does not need in order to build a hello-world application. By embracing Meteor, the novice programmer does not learn anything about those hidden areas, which will probably lead to unexpected results here and there. After all, distributed systems are hard to build.
An example of such a use-case is the todos application which comes bundled with Meteor. It is nicely done and quite easy to understand, but what happens, for instance, if two users with a high-latency connection to the server update the same entry at about the same time? Your changes will be reflected immediately on your machine, with the syncing following a few seconds later, and a few seconds after that you might be surprised to see that your original changes have been overwritten by the second user. With a traditional stack you would do some locking or at least insert versioned entries, but MongoDB does not support this by default. You would be able to implement versioning yourself by using timestamps and doing an insert for every update, but why the additional overhead?
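That do-it-yourself versioning workaround, timestamped inserts instead of in-place updates, can be sketched in a few lines. Plain Python stands in for the MongoDB writes here; the class and method names are illustrative, not part of Meteor or MongoDB:

```python
import time

class VersionedStore:
    """Append-only store: every 'update' is an insert with a timestamp,
    so concurrent writers never silently overwrite each other."""

    def __init__(self):
        self.rows = []

    def write(self, doc_id, value, timestamp=None):
        # an "update" is just another insert; nothing is ever destroyed
        self.rows.append({"doc_id": doc_id,
                          "value": value,
                          "ts": timestamp if timestamp is not None else time.time()})

    def latest(self, doc_id):
        # the current value is simply the newest version
        versions = [r for r in self.rows if r["doc_id"] == doc_id]
        return max(versions, key=lambda r: r["ts"])["value"]

    def history(self, doc_id):
        # every earlier version stays recoverable for conflict resolution
        return [r["value"] for r in sorted(self.rows, key=lambda r: r["ts"])
                if r["doc_id"] == doc_id]
```

The late-syncing second user still "wins" the latest() view, but the first user's version survives in history() instead of vanishing, which is exactly the overhead the paragraph above questions.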
Just to state it again, the words above do reflect my personal experience when playing with Meteor 2 weeks after its initial appearance in April 2012, and I will be happy to give it another try once it has somewhat matured.
As for the @JSZurich event, thanks to @ikr for hosting us and @seldaek for organizing it.
My goal is to realize the backend as well as various mobile clients with the help of other hackers during the make.opendata.ch hackathon, to show what can be done by leveraging publicly accessible data. And just for the fun of jumping on the buzzword bandwagon, it will be SoLoMo (social-local-mobile) ;-).
Day in and day out I am commuting back and forth between where I live and Zürich. In order not to see those hours spent commuting as time wasted, you have to spend them wisely, which often means doing some kind of work. For a programmer that is a piece of cake ;-). However, there are times you just want to talk to someone remotely familiar. You could obviously do some kind of creepy stalking and wait outside the train until the very last minute before departure to see if you spot someone you know. Chances are you will either find no one, or they actually want (or have) to work instead of chatting with you. So the status quo sucks, right? Well, could we improve it?
Obviously yes. Let us create a mobile app that allows you to check in to a train and then find conversation-partner matches via Twitter, Facebook and maybe even Foursquare.
When starting the app for the first time, you use one of the buttons on the login screen to connect to your favourite network and start using the TrainsharingApp.
The subsequent window is the home screen, on which you can enter your commuting route. This will make a call to sbb.ch to get the timetable and save it locally for later reuse. If you have already used this route, there is a quick-dial button to select your route and then pick the departure time on the following view.
As soon as you have chosen a time, you are automatically checked in to that train route, which will trigger a push notification to users on the same train, or users who will share part of your route and with whom you are friends on one of the social networks offered at login. After sending those notifications to friends, or receiving one, you are able to initiate a meetup by selecting their name, adding optional information and tapping the "Meetup" button.
As for the backend, there already exists a scraped static dataset of train lines with their corresponding numbers, e.g. S8 18898, and each route (station to station, see below). All in all, about 230'000 routes, which will reside inside a MySQL DB. In case you want to play around with the dataset, here is a routes table dump.
id | linename | dep_station | dep_time | arr_station | arr_time |
---|---|---|---|---|---|
1 | S2725920 | Waldshut | 06:24 | Koblenz | 06:29 |
2 | S2725920 | Koblenz | 06:44 | Klingnau | 06:47 |
For matching friends there will be some kind of NoSQL DB instance, since MySQL is not that good at this without creating a lot of table and row overhead. This still needs to be figured out; suggestions are welcome.
Additionally, there will be a join table (routes_users, see below) for matching routes with users. Since the routes_users table can fill up quite fast and past data does not need to be kept warm, it will be pushed to S3 for later analysis and then cleared every 24 hours.
id | routes_id | users_id | start_time |
---|---|---|---|
1 | 423 | 234412345 | 12:03 |
2 | 5122 | 967512345 | 14:34 |
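The core matching question, namely which other users are checked in on one of your routes, is essentially a join over routes_users. A naive in-memory sketch (the real thing would be SQL plus the friend store; the function name is my own):

```python
def overlapping_users(routes_users, user_id):
    """Return the ids of other users checked in to at least one
    of the given user's routes (a naive join over routes_users)."""
    # all routes the given user is checked in to
    my_routes = {row["routes_id"] for row in routes_users
                 if row["users_id"] == user_id}
    # everyone else sharing at least one of those routes
    return sorted({row["users_id"] for row in routes_users
                   if row["routes_id"] in my_routes
                   and row["users_id"] != user_id})
```

Feeding it rows shaped like the table above (routes_id, users_id) yields the candidate list that the friend-matching step would then filter against the social graph.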
So much for the storage part, now over to the API. There will be at least three endpoints: /login, /checkin and /read.
/login will be used to send the social network authentication tokens over to the server to allow us to do the bandwidth-heavy parts. The request should be a POST with the credentials in its body, which will return a trainsharingID as a response. This trainsharingID will then be stored on the client device and sent with every single request as a query-string parameter.
Key | Value |
---|---|
network | facebook, twitter or foursquare |
token | network_token |
token_secret | network_token_secret |
/checkin is, as the name suggests, the endpoint for when a user wants to check in to a train ride. For clarification, a train ride consists of multiple routes (station-to-station) and may also involve switching trains. Requests to /checkin should also be POST requests, with the trainsharingID in the URL as follows: /checkin?trainsharingID=your_trainsharing_id. As for the POST body, it is the information delivered when clicking on the details section for the specific connection in the timetable on sbb.ch.
Key | Value | Key | Value | |
---|---|---|---|---|
dep_st-1 | Siebnen-Wangen | dep_st-2 | Pfäffikon SZ | |
dep_t-1 | 06:03 | dep_t-2 | 06:19 | |
arr_st-1 | Pfäffikon SZ | arr_st-2 | Zürich HB | |
arr_t-1 | 06:13 | arr_t-2 | 06:48 | |
train_id-1 | S2 18220 | train_id-2 | IR 1754 |
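The numbered keys of that POST body (dep_st-1, arr_t-2, ...) can be folded back into one record per train leg on the server. A sketch of that parsing step (a hypothetical helper, not settled API code):

```python
def parse_checkin(body):
    """Group the flat `field-N` form keys into one dict per train leg,
    ordered by leg number."""
    legs = {}
    for key, value in body.items():
        # "dep_st-1" -> field "dep_st", leg index 1
        field, _, index = key.rpartition("-")
        legs.setdefault(int(index), {})[field] = value
    return [legs[i] for i in sorted(legs)]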
/read will be the only endpoint accessible via a GET request, though it still needs the trainsharingID in the query string. /read is for manually asking whether new overlaps have been found since checking in. The goal, however, is that this endpoint is only used during development, with new overlaps sent to the user via push notifications in a production setup.
Enough with the server stuff, what happens on the mobile? Well, upon first launch of the app the user signs in with one of those social networks, which creates an OAuth token and an OAuth token secret that are sent to the /login endpoint mentioned above. The trainsharingID in the response then gets stored persistently in the application storage.
Next up is the home screen, where the user can enter departure and arrival destinations to look for available trains. This triggers a POST request to sbb.ch from the phone; the response is parsed and the available options are shown to the user. If the user selects a specific time, another GET request is fired to fetch the detail information for that connection, whose response gets parsed and sent to the /checkin endpoint.
If matches already exist, the response to the /checkin POST request will contain users from the various networks who match the friends criteria. In case none exist yet, the response will be empty. In both cases the user will be notified via push if additional matches are made later on.
Both a proper UI and API endpoints for the meetup functionality still need to be figured out. Maybe it even makes more sense to use the native networks, e.g. Facebook or Twitter, for messaging instead of building yet another Whatsapp clone.
Well, that is all there is to say about the TrainsharingApp at the moment. Have I missed something? Or do you have a specific suggestion concerning changes, additions or removal of features?
In case you are interested in creating a native mobile client besides the one for WindowsPhone, which is already covered by Adrian, let me know either via Twitter or the comments below so I can get in touch with you. The same goes for designers.
Below are the recordings of last week's joint event by the IT geeks Zurich and GTUG Zurich. At this event, Luuk van Dijk, a staff software engineer at Google who's currently working on the Go compiler, and Johan Euphrosine, a Developer Programs Engineer working on App Engine, offered some insights into how Go can be leveraged and what it does internally to make that possible.
Also many thanks to the organizer Muharem Hrnjadovic for making this second amazing IT geeks event possible.
And without further ado, enjoy the talks below, and excuse the pizza distribution dilemma we luckily had to deal with.
OpenData, BigData, infographics, visualizations and data journalism are all buzzwords and movements which have started to gain quite some traction lately.
While there are lively ecosystems of blogs around niche topics like visualizations, Processing or data journalism, there is not enough interdisciplinary communication going on, in my opinion.
Well, how about creating it by interviewing those experts, giving them some airtime and maybe even connecting previously unacquainted scientists, journalists, hackers, politicians and ideators with each other?
Since I am still toying with the idea, nothing is fixed just yet. The language might be English or German, depending on who is in front of the camera. Also, if you are interested in being a host or can suggest a person, let me know.
Would you like to listen to or watch it?
John Perry:
One needs to be able to recognize and commit oneself to tasks with inflated importance and unreal deadlines, while making oneself feel that they are important and urgent.
Being a heavy procrastinator myself, I agree with John Perry, but I'd like to extend one point. While some work can be encouraging, even slightly more can have devastating effects. From my personal experience, a task overflow can turn everything into white noise, at which stage you stop caring about any of it.
A positive side effect: you're able to work off those items quite relaxed. On the other hand, you might also over-commit to incoming work, because estimating white noise is kind of difficult.
On another note, have a look at the awesome copy in the article footer. Brutally honest.
Yesterday, on January 19th, Swiss OpenData enthusiasts and activists founded the OpenData.ch association in Bern. Its goal is to bring together citizens, journalists, designers and developers to realize ideas based on publicly available OpenData and OpenGovernmentData.
More about the goals and mindset of OpenData.ch can be found in the (German-only) Open Government Data for Switzerland Manifesto.
If you're up for creating something with OD, whether you're a developer or not, reserve March 30th and 31st, when the next Make.opendata.ch hackathon will be held. Need some inspiration about what the first one was like? Check out the great summary by datavisualization.ch or read my hackathon review.
Thrilled to start this new chapter with an amazingly diversified group of people.
James Hague:
Be widely read. There are endless books about architecture, books by naturalists, both classic and popular modern novels, and most of them have absolutely nothing to do with computers or programming or science fiction.
Seems funny now, but I had the most difficult time letting go of all those other things when I started studying. No more philosophy or politics, just algorithm runtimes and graph theory.
via the codeproject newsletter
Richard Minerich:
We work in an environment where hearsay and taste drive change instead of studies and models.
While VCs and influencers encourage us to jump on the emotional UX and viral social-network train to make our ideas succeed, we tend not to consult our logs first. After all, tapping in the dark is not science, but that's sort of another topic.
Richard Minerich writes about the shift in programming languages away from proven ones towards scripting languages. I tend to agree that dynamic languages should mainly be used as glue. Building a house out of porous cardboard could work if you're an experienced professional, but it'll probably fail for most of us. That said, most of my code to date is dynamically typed, because it's just way too comfortable.
However, no test suite will ever cover every single error case; on the other side, there will always be static, correct formulas.
Maciej Ceglowski:
I love free software and could not have built my site without it. But free web services are not like free software.
The reason I'm writing this is that while premium services are making money, they're not necessarily attracting enough users to actually accomplish something, while at the same time free, VC-backed startups are doing exactly that. The middle way is to design a so-called freemium service where premium users pay for the free ones, though the companies are obviously not going to tell them that.
Now, after having migrated this blog over to a static version, I needed a replacement to enable visitors to send me e-mail while not opening the doors for spammers at the same time. PHP scripts can easily fulfill that job, but I wanted something else.
Having used the free version of Wufoo in the past, I thought it was a no-brainer to go back and leverage it again, obviously paying for it this time. Then it hit me while checking the pricing page: the cheapest subscription is 15 dollars per month, while free plans display ads to your visitors. What are they thinking! Paying 15 dollars, which is more than I pay for hosting, while receiving only about two messages during that time period?
That said: please, startups and SaaS companies, remove the free model and make premium reasonably priced.
As for Wufoo, I'd guess replacing the free plan with one where you'd pay 2 dollars a month would make them more profit than showing ads on those confirmation pages.
When talking about showing ads to free users, check out the tweet by @romeroabelleira and give it some thought (translated):
Dear Advertisers on Spotify, I don’t even pay for Spotify, I’m therefore worth nothing to you too. Sorry, Juan
And don't waste your time looking for the contact form; I'd just put the e-mail address into the footer for now.
Dslrnewsshooter video: Nikon D4 - video feature run through from Dan Chung on Vimeo.
The most innovative feature from my perspective, however, is the Ethernet connection and what comes with it: a camera management console built entirely in HTML and JS, so one's finally able to leave all the proprietary, heavily bloated, vendor-specific crap software behind and focus on realizing ideas.
Check out the video WHY by Corey Rich below if you want to see what's possible with a D4, provided you know what you're doing.
WHY - Nikon D4 Release Video from Corey Rich on Vimeo.
A great read if you're working on the next big thing yourself, because you might need a team to work on it later.
Once you are no longer alone working on your project, the code you have written sets an example. - Philip Hofstetter
Also, I think it's totally fine to hack together an initial version, hackathon style, that you can improve later on; however, the structure needs to be reasonably stable.