Chaosradio documentary video

Another German tidbit: there is now a short documentary video (MPEG-1, 70 MB) available describing the activities of Chaosradio, the radio project of the Chaos Computer Club Berlin. It features interviews with some of the people involved (including myself).

Chaosradio has been quite a successful project so far. We have managed to produce a show almost every month since 1995. It is a three-hour talk radio show airing on Fritz, the best-known radio station for young people in the Berlin/Brandenburg area.

A really entertaining introduction to the Semantic Web

At Reboot – the „annual meeting place for the digital community in Denmark“ – Ben Hammersley gave an entertaining talk on the Semantic Web. Best of all, there is a video recording of the talk, and Ben has put up the slides as well, giving you the chance to experience his 30-minute talk retroactively. There are more videos available that might be worth checking out, but I haven‘t seen them all yet.

Interestingly, he mentions that Mac OS X „produces enormous amounts of RDF data in the background“. I have no idea what that could mean, but I would be more than happy if somebody could shed some light on it.

Echoes on the web

Hmm. I am comparatively new to the weblogging phenomenon, although I have been busy doing webloggish things for many years now. Discovering the basics of the Semantic Web, the combined power of syndication formats and personal publishing, and the ever-growing trend toward intelligent clients using web services over HTTP with well-defined protocols: it all made sense to me.

Development of RSS has been bumpy: Netscape started with version 0.9 (already based on the idea of RDF) but later stopped pursuing its plans. RSS 0.91 was a rewrite of the format, no longer based on RDF. It is pretty rudimentary but well supported. Dave Winer led the development of further incremental versions (0.92, 0.93).

Then the fighting began. Some bright minds took up the original idea of designing news channels around RDF and developed RSS 1.0. I considered (and still consider) this a very smart move. Dave Winer got angry and put out RSS 2.0, which was based on the 0.9x lineage and had nothing to do with the 1.0 idea of re-integrating RSS into RDF.
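To make the difference concrete, here is a small Python sketch (feed content made up by me) showing the same item in the two worlds: in the 0.9x lineage it is plain nested XML, in RSS 1.0 it is a namespaced RDF resource with its own URI:

```python
import xml.etree.ElementTree as ET

# Minimal RSS 0.91-style feed: plain XML, no namespaces.
rss091 = """<rss version="0.91">
  <channel>
    <title>Example</title>
    <item><title>Hello</title><link>http://example.org/1</link></item>
  </channel>
</rss>"""

# The same item as RSS 1.0: everything lives in RDF, the item is a
# top-level resource identified by rdf:about (URIs are made up).
rss10 = """<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns="http://purl.org/rss/1.0/">
  <channel rdf:about="http://example.org/">
    <title>Example</title>
  </channel>
  <item rdf:about="http://example.org/1">
    <title>Hello</title>
    <link>http://example.org/1</link>
  </item>
</rdf:RDF>"""

# In 0.91 the item nests inside <channel>; in 1.0 it is addressed
# through the RSS 1.0 namespace, like any other RDF data.
ns = "{http://purl.org/rss/1.0/}"
item091 = ET.fromstring(rss091).find("./channel/item/title")
item10 = ET.fromstring(rss10).find(ns + "item/" + ns + "title")
print(item091.text, item10.text)
```

The point of the RDF version is that the very same triples can be merged with any other RDF data about the same URIs, which is exactly the integration argument made above.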

So here we are now: it is a mess. To confront the mess, the RSS people are somehow gathering around an effort to create something new called „Echo“. It is supposed to clear up the clouds, make everybody happy and create a garden of happy bloggers where food flies through the air.

Well, I don‘t buy it. Echo is more a successor to RSS 2.0 than to RSS 1.0, and is therefore not based on RDF. I still think RDF is the right path to follow, as it makes integration with all the other RDF infrastructure so easy. There is a post by Dan Brickley on the www-rdf-interest mailing list that I do agree with.

Ben of Six Apart (the company producing Movable Type) has written about his motives for supporting Echo. Maybe I am a bit naive, but I don‘t see big problems here. Let‘s recap what his points are:

  1. The RSS spec does not say how to encode content. Well, using RDF this is just a matter of de-facto standardization. I think any kind of public agreement on RSS in RDF could settle it. Then take XHTML and you are done.
  2. XML-RPC is severely lacking in internationalization (I18N) support. True. But what does this have to do with RSS? Take SOAP and the problem is solved.
  3. Content is represented differently in an API than it is in a syndicated feed. Could be solved by moving to SOAP as well.
  4. Confusion over elements. Well, of course. There is a lot of confusion. But clarifying meaning does not necessarily require designing a completely new format. An RSS/RDF 1.1 could, in my eyes, be enough to clarify things and perhaps add new elements to represent new and old meanings.
  5. No universally-supported and -defined extensions. Again, this could be the topic of an additional document focusing on semantics; call it RSS 1.1.

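Point 1 is worth illustrating. Today an item‘s content typically arrives as HTML escaped into a text node, so a reader must guess whether the string is really markup; with proper XHTML elements in their own namespace the structure is explicit. A small sketch with made-up content:

```python
import xml.etree.ElementTree as ET

# Style 1: HTML escaped into a plain-text <description> -- the
# parser hands back markup as an opaque string.
escaped = ET.fromstring(
    "<description>&lt;p&gt;Hello &amp;amp; welcome&lt;/p&gt;</description>")
print(escaped.text)  # the tags survive only as text

# Style 2: real XHTML elements in the XHTML namespace -- the
# structure is machine-readable, no guessing required.
xhtml = ET.fromstring(
    '<content xmlns:x="http://www.w3.org/1999/xhtml">'
    '<x:p>Hello &amp; welcome</x:p></content>')
print(xhtml.find("{http://www.w3.org/1999/xhtml}p").text)
```

A public agreement on the second style is all that seems needed; no new format required.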
However, there seems to be strong support for the Echo project, and I am curious to see how the Wiki-based development process will turn out in the end.

Die Bayrische Hackerpost

After a long time, an archive of scanned copies of the German magazine „Die Bayrische Hackerpost“ („Bavarian Hacker Post“) has been posted to the web. You can find it here.

Die Bayrische Hackerpost (BHP) was one of the first hacker publications in Germany. Beginning in 1984 – the same year the Chaos Computer Club launched its magazine „Die Datenschleuder“ – a bunch of Bavarian hackers published reports on technology, the hacking scene and hacker culture. The BHP was considered part of the German hacker family, as everybody knew each other; the whole scene wasn‘t that big anyway.

The tagline of BHP read „Das Informationsblatt für den lebensbejahenden DFÜ-Benutzer“, which means something like „the information gazette for users of remote data transmission with a positive approach to life“. Sounds a bit awkward, but the term DFÜ was actually very popular back then and described all kinds of hacking activity related to BBSes and the Arpanet. Funnily enough, Microsoft is still using this antiquated term in its German localization of Windows.

GeoURL makes the web a bit more semantic

Making the web a bit more semantic, the GeoURL ICBM Address Server is a fine concept for mapping web sites to – you guessed it – locations. So you can attach your home page to the place where you live or whichever place you think is most relevant. In order to add your server to the database, you have to add some meta elements (vernac. „tags“) to your HTML.

There are two styles to choose from: GeoURL‘s own ICBM meta tag, or alternatively the geo.position element from the GeoTags family of meta keywords. The latter is a bit more descriptive as it allows for additional fields. It is equally supported by GeoURL, but the ICBM entry is sort of cool as it dates back to the good old days of Usenet.

It‘s a bit tricky to get your own location if you don‘t have a GPS receiver. There is a list of helpful resources at GeoURL to find your location. While there are a lot of web sites covering North America, there is much less available for Europe. I found WhereOnEarth to be sufficient for my needs.

So in the end I enriched this weblog with spatial information by adding the following fields to the HTML header:

<meta name="icbm" content="52.52207, 13.38274">
<meta name="geo.position" content="52.52207;13.38274">
<meta name="geo.placename" content="Berlin, Berlin, Germany, Europe">
<meta name="geo.region" content="DE-BE">

Once you have done that, you can „ping“ GeoURL with your site. The GeoURL server reads the latitude/longitude information from either element and stores it in its database. You can then look up other sites that have specified locations in your area. Here is my neighbourhood. Seems as if I have the most central blog site in Berlin.
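What the GeoURL crawler has to do on its side can be sketched in a few lines of Python. The parsing logic below is my own illustration, not GeoURL‘s actual code; it pulls the coordinates out of both meta styles:

```python
from html.parser import HTMLParser

# Extract latitude/longitude from ICBM and geo.position meta
# elements, the two styles described above.
class GeoMetaParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.coords = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        a = dict(attrs)
        name = a.get("name", "").lower()
        if name == "icbm":                      # "lat, lon"
            lat, lon = a["content"].split(",")
            self.coords["icbm"] = (float(lat), float(lon))
        elif name == "geo.position":            # "lat;lon"
            lat, lon = a["content"].split(";")
            self.coords["geo.position"] = (float(lat), float(lon))

html = '''<head>
<meta name="icbm" content="52.52207, 13.38274">
<meta name="geo.position" content="52.52207;13.38274">
</head>'''
p = GeoMetaParser()
p.feed(html)
print(p.coords)  # both styles yield the same coordinate pair
```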

Even cooler, this list can be returned as RSS (and other useful formats), so you can track changes to your list of „neighbour sites“ with your RSS reader.

WebKit becomes accepted

Shortly after Safari 1.0 saw the light of day with the new WebCore framework (or WebKit) as part of the release, independent developers are embracing the new tool to replace their old HTML renderers. It comes as no surprise that two weblog utilities are among the first to push out new versions recompiled for the new framework.

Shrook was released as a „Technology Preview“ a couple of days ago and now renders its HTML preview pane with WebKit. And today, Kung-Log also advanced one number after the point to embrace WebKit for its preview pane.

Always a bit ahead of the pack, the Omni Group has released a beta version of its alternative browser OmniWeb that also uses the new rendering toolkit.

The good thing is: more and more applications are about to go one step further and make use of new web standards.

Mac OS X Panther: A Bug Fix

After having had a first look at Panther‘s new features I can‘t get rid of the feeling that Apple is about to launch Mac OS 9.3 instead of 10.3. Why?

First, the overall look of the new UI is closing the gap to X‘s ancestor: the window titles become solid grey, the menus regain the old-school divider lines and the Finder reintroduces color labels for files. Hooray. Have I waited for this for a long time? No. Well, I agree that this face-lift is the right thing to do, as it is obvious that Apple was misled when going for the greyish striped look in critical user interface elements. However, this can be considered a bug fix, not a feature.

Then there is the „all-new“ Finder. It is actually only slightly changed. It makes use of the obscure „Network“ entry point for the first time. The technology behind it is old: automount. It has been used by OS X for quite a while now (for mounting home directories from OS X Server). So for the first time we can use something in the way it was conceived from the beginning.

The Open/Save file dialog has been changed once again. This time it seems as if Apple has finally understood: it is the same interface as the Finder. But wait, so far there are only the column and list views. No icon view. Why? And the list view does not feature disclosure triangles. Why? Argh. Apple, please. Add the icon view, finally get it right, and please don‘t call it a new feature. Another bug fix.

Ah yes, and then there is Exposé. I admit it might be useful, although it tries to steal another three function keys from me (but this can be changed). But Exposé is not really a revolution: it is just a workaround for the basic shortcomings of window-based user interfaces, albeit a welcome one.

Fast User Switching: we have seen this on Windows. Bug fix. Faxing? Windows users have been doing it for years. FontBook? It‘s about time. FileVault? Still no details, but crypto file systems are nothing new to UNIX users. Fast PDF rendering? I wondered why it was slow in the first place. A faster Mail.app with thread support? As long as it also stops crashing all the time, it is appreciated.

So far, I find nothing special that might be worth paying for. Okay, there are some other improvements in the Windows sharing and VPN area, there is explicit support for IPsec and so on. But this could have been added to Jaguar as well. This is no more than a point release.

SFTP and SSH tools on Mac OS X

The FTP protocol has been around since the early days of the Internet. Together with the TELNET protocol it formed the basis of interaction on the net in those days. Today, FTP is still in wide use, as so many people are used to the protocol and there are so many clients and servers available and installed.

But FTP lacks a certain feature: security. Passwords are transmitted unencrypted, and therefore FTP should be avoided except for public servers with anonymous access enabled. But there is help: the SFTP protocol is an FTP-like protocol run over SSH (Secure Shell) that can be considered „secure enough“ these days. With Mac OS X, SSH use is becoming ubiquitous, as the SSH server is not only preinstalled but can be switched on with a single click in the Sharing preference pane. With SSH enabled, you gain SFTP access immediately. The only thing you need is a proper SFTP client.
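For the command line, nothing extra is even needed: once Remote Login is switched on, the sftp program bundled with Mac OS X will talk to the server. An entry in your ~/.ssh/config file saves typing; the host name and user below are placeholders:

```
# ~/.ssh/config -- example entry (host name and user are placeholders)
Host myserver
    HostName server.example.org
    User alice
```

After that, „sftp myserver“ is enough to open a secure session, and scp and ssh honour the same entry.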

There are two excellent contenders in this area: Transmit and Fugu. Both programs received honors at WWDC this week: Fugu won the „Best Mac OS X Use of Open Source“ award and Transmit scored second place for „Best Mac OS X User Experience“. I can basically agree.

Transmit is a really, really fast program. Use it for FTP transfers and it rocks. Best of all, there is seamless support for SFTP as well. It might be your first choice for general SFTP use, but it is non-free and it lacks an important feature: support for public key authentication and SSH agents. Fugu, on the other hand, is a dedicated SFTP/SCP/SSH client (so it has no FTP support at all) and has a comparably easy interface. And it‘s free. And, best of all, it does support public key authentication and SSH agents. What is an SSH agent, you ask?

An SSH agent is the „keychain“ for SSH. It stores secret keys and allows repeated access to them across multiple SSH sessions. As SSH is a command-line UNIX application, integration with the Mac OS X Keychain is not that easy. But there is another helpful tool called SSH Agent that actually does this integration and makes working with SSH on Mac OS X a breeze.

When you install it, you can store your SSH passphrases in your keychain. Okay, you might consider this a security risk, as the login password might be a bit easier to guess and now opens up access to a multitude of accounts that are actually key-based. But you are choosing your passwords carefully and changing them regularly, aren‘t you? So this is not a problem (except that nobody knows which encryption Apple uses for the keychain, but I guess it‘s AES). Once you have installed SSH Agent, you will know what you have been missing.

The Mac is finally becoming a viable platform for UNIX system administration. And this is good.

Troubleshooting iChat AV

Apple‘s iChat AV right now seems to be the number one toy in the Mac community. I have already gone through a couple of tests, and it is obvious there are some problems with connecting from behind a NAT router. In general it is possible, but it seems to depend heavily on the NAT implementation being used.

My packets in fact pass through two (!) cascaded NAT routers before they leave for the Internet. The first one is a Linux box, the second one is running NetBSD. But it works in both video and audio mode with all machines that have a public IP address, and even with other computers behind yet another NAT router. So let‘s dig up some dirt. How do they do it?

Apple‘s own documentation is quite sparse on this topic. There is a TechNote explaining the ports that need to be open behind firewalls. But this does not explain how it works. So I dug deeper and discovered a page on NAT checking by Bryan Ford. He has actually prepared an Internet-Draft on this topic.

He describes a model for UDP-to-UDP communication behind NAT in which a third computer tells each of the peers the IP address and port that are actually used when the other one sends out UDP packets. I don‘t know if this is the method Apple uses, but they have both a central computer (the AIM system) and are in fact using UDP to communicate. And I don‘t see any other way to do this anyway.
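The message flow of that model can be sketched in a few lines of Python. This is a toy version running entirely on localhost, so no actual NAT is in the path; it only illustrates the rendezvous idea from the Internet-Draft, not Apple‘s implementation:

```python
import socket
import threading

# Rendezvous host: observes the (address, port) each peer's UDP
# packets really arrive from and tells the other peer about it.
def rendezvous(server):
    peers = {}
    while len(peers) < 2:
        name, addr = server.recvfrom(64)
        peers[name.decode()] = addr          # observed endpoint
    # tell each peer where the other one's packets came from
    server.sendto(("%s:%d" % peers["b"]).encode(), peers["a"])
    server.sendto(("%s:%d" % peers["a"]).encode(), peers["b"])

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
threading.Thread(target=rendezvous, args=(server,), daemon=True).start()

# Two peers register by sending a packet to the rendezvous host.
a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for name, s in (("a", a), ("b", b)):
    s.bind(("127.0.0.1", 0))
    s.sendto(name.encode(), server.getsockname())

# Each peer learns the other's observed endpoint from the server.
host_b, port_b = a.recvfrom(64)[0].decode().rsplit(":", 1)
host_a, port_a = b.recvfrom(64)[0].decode().rsplit(":", 1)

# Now the peers can exchange UDP packets directly.
a.sendto(b"hello from a", (host_b, int(port_b)))
msg, _ = b.recvfrom(64)
print(msg.decode())
```

Behind real NATs this only works if the routers keep translating the same internal socket to the same external port, which is exactly what natcheck tests.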

He provides a small NATCHECK program with precompiled versions for Linux and FreeBSD. The source does not compile out of the box on Mac OS X, but I patched it to make it work. Just a single line of code was missing, so it is no big deal. You can find the compiled program and the patched source code in this disk image (if you are not technically inclined: download the image, wait for it to open, then open „Terminal“, drag the „natcheck-darwin“ file to the window and hit return).

The program detects whether your router is suited to peer-to-peer communication or not. For my setup it reports:

RESULTS:
Address translation:           NAPT (Network Address and Port Translation)
Consistent translation:        YES (GOOD for peer-to-peer)
Unsolicited messages filtered: YES (GOOD for security)

The important point seems to be to have NAPT with consistent translation. Routers that reported NO on consistent translation have not been able to communicate with me so far.
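What „consistent translation“ actually means can be shown in a few lines: send from one local socket to two different destinations and compare the source port each destination observes. This is my own illustration, not natcheck‘s code, and on localhost there is no NAT in between, so the answer is trivially YES; behind a router, matching ports are what the GOOD verdict stands for:

```python
import socket

# One client socket sends a probe to two different destinations;
# each destination records the source (address, port) it observed.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.bind(("127.0.0.1", 0))

observed = []
for _ in range(2):
    obs = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    obs.bind(("127.0.0.1", 0))
    client.sendto(b"probe", obs.getsockname())
    _, addr = obs.recvfrom(64)
    observed.append(addr[1])   # source port as seen by this destination
    obs.close()

# Consistent translation: both destinations saw the same source port.
print("Consistent translation:", "YES" if observed[0] == observed[1] else "NO")
```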

I am still not sure if this is the key to solving the iChat problems; feel free to comment on this issue here. I‘ll keep you updated on my progress.

UPDATE: There is a thread on Mac OS X Hints covering the same topic. At first glance, no real news however.

UPDATE: For people using the AirPort Extreme base station via DSL (PPP over Ethernet) the V5.1 firmware update (also included in the 3.1 update for the whole AirPort software suite) improves the situation.

UPDATE: I have compiled natcheck with the „verbose“ flag set, so it reports the IP address and port number detected by the outside host. natcheck itself always uses port 9857 and connects to port 9856.