The problem with Delicious

I recently heard (well, read on a listserv) from a couple of folks I know who were looking for an alternative to Delicious. These people have a lot invested in their bookmarks but are finding it difficult to re-find them for various reasons and have therefore decided to move on.

I too have been having that problem as of late. Perhaps I am not careful enough to put in every tag that I should for every entry, or perhaps I am not consistent enough in my tagging. While I could blame myself, it seems there are a couple of things that could be done to help me out. For instance, when I bookmark something, let me search not only the tags I have entered but also the top tags, since many people collectively do a better job of tagging than one. That's the power of the crowd, no?

I know, I know, Delicious gives me the opportunity to use the top tags when bookmarking. Unfortunately, the main way that Delicious has been failing me is in not recognizing when I am linking to something that already exists because the URL is slightly different, and therefore not giving me the opportunity to use those tags.

I spend a lot of time reading articles in the Times, some of them directly from the site, some of them via RSS, and many of them via email (from various news alerts that I have set up). With each of these methods, visiting the same story on the Times site yields a slightly different URL ending:

Here is the main URL for an article, which is the base but could contain any of the above at the end:

It seems that I generally come upon NY Times articles in a different way than most other Delicious users, as they never seem to have been previously bookmarked. Strange.

Doing a tag search on Delicious for something obvious in the article ("nytimes" and "seeclickfix") illustrates the problem:

There are at least 11 different entries with slightly different URLs.

Since mine will be the 12th version, I won’t have the benefit of having any tag suggestions from previous bookmarkers. This makes me sad and it probably means that I’ll never find the article again.
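A bookmarking service could sidestep this whole problem with simple URL canonicalization before checking for an existing entry. Here is a minimal sketch in Python; the tracking suffixes shown are hypothetical examples, not the actual Times parameters:

```python
from urllib.parse import urlparse, urlunparse

def canonicalize(url):
    """Strip the query string and fragment so that all variants
    of the same article map to one canonical URL."""
    parts = urlparse(url)
    return urlunparse((parts.scheme, parts.netloc, parts.path, "", "", ""))

# Hypothetical variants of the same (made-up) article URL:
variants = [
    "http://www.nytimes.com/2008/10/05/example.html?em",
    "http://www.nytimes.com/2008/10/05/example.html?ref=technology",
    "http://www.nytimes.com/2008/10/05/example.html?_r=1&partner=rss",
]
assert len({canonicalize(u) for u in variants}) == 1
```

Stripping the whole query string is too aggressive for sites where the query identifies the page, so a real service would need a per-site rule, but for cases like this one it would collapse all twelve bookmarks into a single entry with shared tag suggestions.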

Come on, Delicious. I know it is easy to fall into the void when you are purchased by a company such as Yahoo, but someone there must care a little bit.

C-SPAN online coverage of debate

C-SPAN has a really interesting site for showing the debate videos. It has a transcript search, a blog aggregation, a Twitter message board, and so on.

Here are some screenshots:

Transcript along with video

Twitter and Blog Aggregation:

Transcript Keyword Visualization (wish you could drill down):

This might be even more interesting: Performance Group Blends Video Art, Public Service
“Three MIT grads have devised a way to “remix” the presidential debates — live. Friday night in Boston, they used custom computer software to analyze the candidates’ movements and speech patterns in real time, with a nightclub vibe.”

“Google’s Idea of Primetime”

Over at Shelly Palmer's blog I wrote a comment in response to his thoughts on a recent Google presentation, where it was noted that kids and teenagers weren't consuming YouTube as much as previously assumed. He discussed some possible alternatives (video games, comic books, the Long Tail) but missed one very important point. (I left the following as a comment on his blog, but the formatting was terrible, so it is reproduced below.)

Regarding the consumption of media by teenagers, I think you are missing one very important aspect of online life, particularly as it relates to teenagers: talking. Every teenager I have talked to and asked what they do online has named one of the following three things: MySpace, Facebook, or AIM.

I break this down as follows: constructing identity, meeting people, and talking with them. Sharing and consuming media, I believe, are aspects of that but take a back seat to the primary socializing behavior. I think it is possible that we are entering an era much more radical than the rise of the "Long Tail"; we just might be going back to individual and small-group storytelling as the primary media.

Come to think of it, it isn't that radical; it only seems different from the norm if you were born between 1940 and 1990.

Networked Video in 10 Years : Networked Video == Parseable Video

Recently, I had a chance to discuss what online video might look like in the next 10 years with a group of very smart people at the Video on the Net: Beyond YouTube? breakout session at the Beyond Broadcast conference.

There are those who believe that the video internet is currently going through its growth spurt, much like the text internet did in the 1990s. In some respects, I very much agree. The phenomenal growth of activities such as video blogging, aggregation, playlisting, and podcasting has gone a long way toward making video a normal part of the web.

In other respects, I see a long road still ahead. Mike Lanza of Click.TV outlined a thought that is very pertinent. He stated that in the current iteration of online video, interaction, particularly social interaction, occurs around the video with tools that are firmly based in the world of the textual web: tagging, commenting, sharing, and the like. This is evident all through the popular video aggregators and video blogs; a quick trip to YouTube illustrates it well enough.

Of course, there is more happening. People are remixing, starting to make comments in time with video, and creating videos in response to other videos, but these are certainly not the dominant forms.

It is obvious that online video must take, and is taking, a different form from the video that we have all experienced over the past 50 years (namely TV). It is on-demand, lean-forward, and necessarily of limited quality and duration.

What is slightly less obvious is that current iterations of the popular online video formats are black boxes. They depend on the text around them to provide context and searchability. Metadata, which could provide some of this information, is non-standard, if it exists at all. In other words, we are moving from a pure text internet to a multimedia internet, but that multimedia, in order to be useful, needs to be described, or put back into text, in some manner.

Now, I am not saying this is a bad or useless thing. We can scan text, pull out key points in a non-linear fashion, and navigate through text. None of these things is easy with video in its current form. Video is rich and has tremendous emotional impact, but it also has a lot of baggage.

One of the things we discussed in our group was "What would a video wiki look like?" A wiki is a very successful example of many of the things that the web was originally designed for. Wikis are open platforms for anyone to write, edit, erase, converse, and otherwise publish content online.

Unfortunately, no one really had an answer. There are signs that collaborative editing platforms are getting there, but editing is only one aspect of the language of video. There is also all of the production in the first place. Perhaps wikis just don't translate into a medium with an infinite number of variables. In text, language adds some semblance of the finite; in video, there isn't a defined language with parseable portions.

My thesis here (and this is neither new nor original) is that for video on the net to reach the relevance of text on the net, to be truly searchable, scannable, and shareable, it must at the very least be parseable. We must be able to hyperlink to portions of it, drill deeper within it, copy and paste it, and search it.
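"Hyperlink to portions" is the piece of this that has since acquired a concrete convention: the W3C Media Fragments syntax, which appends a time range to a media URL as a fragment (#t=start,end, in seconds). As a small sketch of the idea (the URL and helper name are made up for illustration):

```python
def fragment_link(video_url, start, end):
    """Build a hyperlink into a time range of a video using the
    W3C Media Fragments convention: url#t=start,end (seconds)."""
    return f"{video_url}#t={start},{end}"

# Hypothetical example: link directly to seconds 90-120 of a clip.
link = fragment_link("http://example.com/talk.ogv", 90, 120)
print(link)  # http://example.com/talk.ogv#t=90,120
```

A link like this only gets you addressability, not searchability; the deeper point stands that the content inside the time range still needs a parseable description.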

What would a video wiki look like?

Lastly, a note: researchers, academics, cinematographers, and practitioners who use video have been talking about these issues for as long as video has been around. This is not a new conversation, but it is certainly one that is becoming increasingly relevant. One place where you might find people discussing these very issues is the netvidtheory Yahoo Group.

Some meta learning from Beyond Broadcast 07

First of all, major kudos to Steve Schultze and the rest of the folks who took on the organizing this year. Great job!

Some other things I knew but was reminded of regarding events and conferences of this sort:

1: It is much better to be a participant (audience member) than an organizer, if only for the fact that you can actually pay attention.

2: Most events, Beyond Broadcast included, work better as one-day events than as two-day events. Energy and participation are sustained much more effectively. Cutting portions out can be tough, but in the end it is probably the right thing to do.

3: Given the opportunity, people will enthusiastically participate in documentation and “continuing the conversation” using web technologies. Utilizing something as simple as a common set of tags to be used across sites and platforms is an effective means to enable and encourage participation. People want to and will gladly contribute if the structures for contribution have low barriers.

Want some evidence? Try the tag "beyondbroadcast" on Flickr, YouTube, Google Blog Search, Technorati, and so on.

P.S. If anyone has an IRC transcript, or if anyone did any reporting from Second Life (I am using the word "reporting" very loosely), I would love to be pointed to them.

Beyond Broadcast 2007

Beyond Broadcast, a conference that I was involved in organizing last year, is happening again this year. It comes on the heels of the Integrated Media Association's Public Media Conference in Boston, which is definitely a good time to have it.

Last year was great (despite the rain) with a series of fantastic talks and panels (check the archives). This year promises more of the same.

Presidential Candidate using YouTube

Tom Vilsack has posted a video on YouTube that is his first foray into "videoblogging". He definitely has someone knowledgeable advising him, as he says the right things, encourages conversation, and wants to hear from us. Good deal. Let's see if he follows through.

Tom Vilsack on using the Internet and Videoblog

ITJ Project Beta Released

Interactive Tele-Journalism
So.. I have finally released ITJ on

With support from Konscious and Manhattan Neighborhood Network we have packaged and uploaded the latest version and it can be downloaded at: