Wednesday, June 27, 2007

Plone notes from Raik

Some months ago, I set up a plone site for our group -- http://sfiles.embl.de .
Mac asked me to blog my installation notes, so here they come:

The basic steps are:
- create a zopeuser on your server
- install python and zope
- install plone as a product into your new zope server
- create a plone instance
- optional: put it behind apache

Note: Some things may have changed with the more recent versions of Plone / Zope. These are just my own quick-and-dirty notes. I hope they are still helpful.

Initial setup
=============

- Create zopeuser

- add zopeuser to sudo list without password
in /etc/sudoers add::

zopeuser ALL=(ALL) NOPASSWD: ALL
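
For the "create zopeuser" step itself, a minimal sketch (assuming a Red Hat-style system, since these notes use yum -- adjust for your distro)::

sudo useradd -m zopeuser   ## -m creates the home directory
sudo passwd zopeuser
## then add the NOPASSWD line above via: sudo visudo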

Python
======

- download 2.4.3 from python.org

tar xvf Py*
sudo mv Python-2.4.3 /usr/local/src
cd /usr/local/src/Python-2.4.3
./configure
make
sudo make altinstall ## altinstall doesn't overwrite the default python

-> creates /usr/local/bin/python2.4

## local installation: ./configure --prefix ~/data/local
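
A quick check that the new interpreter landed where expected::

/usr/local/bin/python2.4 -V   ## should report Python 2.4.3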

Bibutils
========

Optional -- only needed for the literature management (the CMFBibliography plugin below)

- download bibutils_3.27_i386.tgz from
http://www.scripps.edu/~cdputnam/software/bibutils/
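
Spelled out, the fetch-and-unpack before the mv below is roughly (download URL assumed from the page above -- the tarball may sit in a subdirectory)::

wget http://www.scripps.edu/~cdputnam/software/bibutils/bibutils_3.27_i386.tgz
tar xzf bibutils_3.27_i386.tgz   ## should unpack into bibutils_3.27/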

sudo mv bibutils_3.27/* /usr/local/bin/

Subversion
==========

sudo yum install subversion

Zope installation
==================

- install Zope from source: download, untar, then...::

mv Zope-2.9.3 /usr/local/src
cd /usr/local/src/Zope-2.9.3
./configure
make build
sudo make install
cd /usr/local/Zope-2.9.3/bin

mkdir /usr/local/Zope-2.9.3/instance
./mkzopeinstance.py
## answers to the prompts:
##   folder: /usr/local/Zope-2.9.3/instance
##   user: admin
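
If I remember right, mkzopeinstance.py can also be run non-interactively with the same answers (PASSWORD is a placeholder; check ./mkzopeinstance.py --help for the exact option names)::

./mkzopeinstance.py --dir /usr/local/Zope-2.9.3/instance --user admin:PASSWORD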

- make sure Zope folder belongs to zopeuser

- check $PYTHONPATH
- start zope server::

cd /usr/local/Zope-2.9.3/instance
./bin/zopectl start
>>> . daemon process started, pid=26472

- log into ZMI::

http://sfiles.embl.de:8080/manage


##local installation::

./configure --prefix ~/data/local/Zope --with-python ~/data/local/bin/python2.4

Install Plone
=============

- download Imaging-1.1.6b1.tar.gz from http://effbot.org/downloads/#Imaging::

scp 'grunberg@kaa.embl.de:data/input/Imagin*' ~/input
untar...
sudo mv Imaging-1.1.6b1 /usr/local/src/
cd /usr/local/src/Imaging*
sudo python2.4 setup.py install

- download Plone-2.5.tgz and install all needed Zope products::

gzip -d Plone*
tar -xvf Plone*
sudo mv Plone-2.5 /usr/local/src/
cd /usr/local/src/Plone-2.5
emacs CMFPlone/INSTALL.txt &

rm -r Five ## for Zope 2.9.x
rm *.txt

mv * /usr/local/Zope-2.9.3/instance/Products/

- restart Zope
in ZMI/Control Panel: Shut down Zope
in /usr/local/Zope-2.9.3/instance: ./bin/zopectl start

- install Plone site
in ZMI/ (root folder): Add/Plone site
ID: project

- check: Plone site should be accessible under
http://sfiles.embl.de:8080/project

put Zope behind apache
======================

- check that "Virtual hosting" exists in ZMI root
- work through
http://plone.org/documentation/how-to/plone-with-apache-1.3

in short:

- add to /etc/httpd/conf/httpd.conf::

<VirtualHost *:80>
ServerName synplexity.embl.de
ServerAlias www.synplexity.embl.de
ServerAdmin webmaster@synplexity.embl.de
ProxyPass / http://localhost:8080/VirtualHostBase/http/sfiles.embl.de:80/projects/VirtualHostRoot/
ProxyPassReverse / http://localhost:8080/VirtualHostBase/http/sfiles.embl.de:80/projects/VirtualHostRoot/
</VirtualHost>

## the Deny below goes inside a container section (Location / Directory) --
## see the how-to above for the exact block it belongs to:
Deny from all

- (re)start apache::

sudo /usr/sbin/apachectl graceful
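
Before the graceful restart, it doesn't hurt to let apache validate the new config first::

sudo /usr/sbin/apachectl configtest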

- check: Plone site now accessible under
http://sfiles.embl.de/

- direct access to Zope w/o apache is still possible with
http://sfiles.embl.de:8080/sfiles

Now, the plone site should be up and running behind your apache server. The remaining installation log describes the installation of some additional plone plugins. I'll just give the list of plugins I found useful:

- ATExtensions -- needed by most plugins (perhaps included in recent plone versions?)
- CMFBibliography -- very neat literature management
- AmazonTool -- to import book citations from Amazon
- ATBiblioList -- create virtual literature lists

I put up a little first-steps page on sfiles to get users started:
http://sfiles.embl.de/help/first_steps

There are some more notes on customizing sfiles and creating custom workflows -- I can post them later if you are interested.

That's it... questions / comments are welcome.
Good luck!
Raik

Friday, March 09, 2007

reinstalling drupal, preparing for live & dev sites

Ok, I'm archiving my current dev install of drupal (which is bloated with modules, some of which have altered the mysql database) and starting with a new install of 5.1. Once I have just the right constellation of modules locally, I'll copy them over to our live site (parts.mit.edu/igem07). Here are the mysql commands I used to reset the database for my new drupal install (a consolidated terminal version follows the list):
  • mysqldump -u username -p --databases drupal >/tmp/drupal.sql
  • drop database drupal;
  • create database igem2007;
  • GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES
    ON igem2007.*
    TO 'drupal'@'localhost' IDENTIFIED BY 'foopass';
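For completeness, that's one shell command (the dump) plus three statements typed at the mysql prompt; run from a terminal it would look roughly like this (the username, 'foopass', and the root password are placeholders):
mysqldump -u username -p --databases drupal > /tmp/drupal.sql
mysql -u root -p <<'SQL'
DROP DATABASE drupal;
CREATE DATABASE igem2007;
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES
  ON igem2007.* TO 'drupal'@'localhost' IDENTIFIED BY 'foopass';
SQL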
Then I grabbed drupal 5.1 and followed the install instructions to unpack the files and move them into my webserver's base directory. So, from the terminal:
  • cd /Library/WebServer/Documents
  • rm -rv *
  • curl -LO http://drupal.org/files/projects/drupal-5.1.tar.gz
  • tar -xzvf drupal-5.1.tar.gz
  • mv drupal-5.1/* drupal-5.1/.htaccess ./
  • rmdir drupal-5.1
  • chmod 777 ./sites/default/settings.php
    - this part isn't actually mentioned in the install.txt; see my earlier post
  • visit the drupal site root via web browser and supply database info
  • chmod 644 ./sites/default/settings.php
  • mkdir files
  • sudo chown www:admin files
  • sudo chmod 755 files
That's it! Drupal 5.1 has been restored to virginal status. I'll post the modules I decided on and the configuration details next.

Thursday, March 08, 2007

An Open Scientific Future (or, an email to OWW)

"If you had five minutes on stage what would you say? What if you only got 20 slides and they rotated automatically after 15 seconds? Would you pitch a project? Launch a web site? Teach a hack?"

Check out Deepak Singh's presentation at Ignite Seattle, An Open Scientific Future.

I liked his observation that scientists already have pretty good access to information via tools like ncbi, but to use sites like that you really have to know what you're looking for (he showed a quote by Jon Udell: "Effective search depends on reservoirs of tacit knowledge and conscious skill. Some people possess much deeper reservoirs, and/or can tap into them more effectively, than others. That makes them valuable.")

He then said "I do know what I'm looking for but I can't share that information with the rest of the world, it's limited to me. And that's the challenge, and that's why science needs to get open. Because historically it resided inside us completely." He then talks about some current examples of systems that allow the community to share that expertise in finding data.

Also, Austin and I just checked out his blog and noticed his post about OWW and that OWW has now got a technorati tag. Also also, I just noticed that someone named Pedro is mentioned on Deepak's blog as interviewing Jason, so Jason, is this all old hat to you?

Saturday, February 17, 2007

second to the right, and straight on 'till morning

Seeqpod is cool. I would love to find (or build, if I could) a web app that let me and a group of friends chat while listening to and collaborating on a collective music playlist. (Could ajax be used to stream the current song from the person who had added it to everyone else's browser?) I constantly want to share what I'm listening to with my friends and get their reactions, and listen to the music they think I would really dig. I can imagine keeping one browser window open on the collaborative list and just listening all day (gosh, maybe the site could watch a user's last.fm page to see what they've scrobbled recently and suggest that as the next song in the list.)

I don't know if I'm just paying more attention to what's happening online these days, or if it's actually the case that the rate of the creation of interesting and sophisticated web services & applications is accelerating. It seems like every day a new one is on someone's radar, remixing and building from what was on the radar yesterday. It's bewildering, but it's wonderful too, and it's the sort of thing presaged by Ray Kurzweil and everyone in the "Singularity" camp.

Or is this just the inevitable feeling of getting old? Take solace, for the internet is humanity's Neverland.

Wednesday, February 14, 2007

setting up parts.mit.edu/igem07

Also see the earlier post on setting up drupal 5.0 in osx 10.4.
  • setup crontab (also see here):
    crontab -e
    # m h  dom mon dow   command
    55 23,5,11,17 * * * /usr/bin/wget -O - -q http://parts.mit.edu/igem07/cron.php
  • Setup files directory (Note: on osx apache runs as user 'www'. It's different on our server, which is running some flavor of ubuntu/debian: the user is 'www-data'. Find apache's username on your system with 'ps aux | grep apache'.)
    sudo mkdir files
    sudo chown www-data:admin files
    sudo chmod 755 files

Friday, February 09, 2007

Building Blocks

To summarize the Features of iGEM2007.com:
Rich user & team profiles; Team blogs, Forums, Wiki; Team-centric user access control; Google maps + teams mashup; RSS for everything; Limited, distributed (amongst team leaders) user registration; Feedback & submission of team application; Help docs based on Books, oh yeah, and tagging and digging, of course.

How do we build it? Hopefully a functional skeleton will coalesce out of the following list of drupal resources:

practical demonstrations & explanations
modules
  • Overview of Drupal 5 components & example modules - from api.drupal.org doc
  • Creating Modules (a tutorial): Drupal 5.x - "The full tutorial will teach us how to create block content, write links, and retrieve information from Drupal nodes."
  • php demonstrating how to view a list of nodes on a google map using the views, location, and gmap modules
  • Gmap - API lets other modules use google maps, can generate basic maps in any node from gmap macro code. Integrates with the location.module. Port to 5.1 still unstable.
  • Location - API for geocoding (only u.s. and some of ca & de?), can add location fields to nodes, can do proximity searches on nodes. Port to 5.1 still unstable.
  • Views - "provides a flexible method for Drupal site designers to control how lists of content (nodes) are presented... This tool is essentially a smart query builder that, given enough information, can build the proper query, execute it, and display the results."
  • CCK - "allows you add custom fields to custom content types using a web interface. In Drupal 5.x, custom content types can be created in Drupal core, and the Content Construction Kit allows you to add custom fields to any content type." Here's the list of all Drupal 5.x CCK-related modules
  • Node Profile Module - "This module builds user profile's as nodes, which ... allows using CCK and its field types as well as the CCK form builder. For a maximum of flexibility it'll be also possible to use custom node-types and modules instead of the CCK." Also, it's integrated with views.module.
  • Organic Groups (og) - "An organic group is created by a single group owner, who has special permissions including the ability to delete the group the owner created. Group subscribers communicate amongst themselves using the group home page as a focal point." - basis of "team portal" pages? Or could it just be done with a taxonomy? Also see og list manager and og views. Alternatively, see mailhandler for a non-og way to link forums w/ lists.
  • Converting 4.7.x modules to 5.x
themes
  • How to theme CCK input forms - "So, as a non-developer, semi-technical, marketing/business type person, I set out to discover how to 'theme' my input forms." Breaks it into 4 steps: creating CCK content type -> create [yourcontenttype].tpl.php -> modify template.php -> modify style.css
  • Content Templates module - CCK can generate ugly content; this module adds a "template" tab to CCK editing pages pre-populated with CCK's default layout, making it easy to customize how the fields are output. "But Content Template can actually be used on any node type and allows modification of the teaser and body properties before they go out in an RSS feed or are handed off to the theme."
  • Converting 4.7.x themes to 5.x

Good to know
  • Customization & theming handbook on drupal.org (extensive) - "Included in this section are PHP and SQL code snippets and examples for use in your sites pages, blocks and themes. There are also a few articles on theming engines, which provide the infrastructure to build and create new themes."
  • Basic syntax for accessing the drupal CVS repositories
  • Google Maps API documentation - if only all documentation was this good...

Features of iGEM2007.com

I've looked at a lot of drupal modules now and found much more seemingly-useful documentation. I can't even keep it all in my head at the moment, so I'm going to gather it all together in this post. Hopefully it will be easier to see how it all might fit together once it's clustered on the same page. But first, I'm going to briefly describe the goals of the site again, the basic features, to help contextualize the following list of drupal modules & techniques.

Profiles
Users and the teams they comprise should have rich profiles that are easy for them to edit and easy for us to remix in new views.

Content Types
Users can create and revise a variety of content types: their profiles (which will really be a synthesis of several different content types), posts to their team's blog, certain elements of their team's profile, their team's wiki pages, forum posts, comments, and eventually, media-rich content types like embedded youtube / brightcove video and their history of promoting / digging all of the aforementioned types of content (perhaps not comments).

User Access Control
Users can only edit content that is associated with their team (e.g. team blog, team project abstract) or associated with the entire site (e.g. comments, forums) - they shouldn't be able to edit another team's content.

Team Map
The team profile should contain fields that geolocate the team (either address or lat / lon). The site should feature a mashup of all the teams on a google map, presenting content like the team name, picture, abstract, and interesting recent activity like blog posts (& forum posts & promoted content).

RSS
RSS feeds should be available for all episodic content: team blogs, forums, and other interesting recent activity (diggs, new vodcasts).

User Registration & Team Application
Teams must satisfy several requirements before being admitted to the competition:
  • team leaders (2+ faculty members) must register all of their team members for user accounts (establishing the roster),
  • submit the team proposal, which explains the general logistics of the team's project (where they will get wetlab space, reagents/materials, a place to meet as a team, funding for the entry fee, materials, trip to the jamboree, student stipends, etc.),
  • send us the entry fee,
  • and complete an initial portion of their team's profile focused on administrative information such as shipping address (and lat/lon if we can't geocode it), team name, official team name, affiliated institutions, etc.
The website must provide feedback to teams (on their profile page?) on their progress through the application process, making it obvious to them and us what steps have been completed and which are left.

Help Documentation
The new site should take advantage of the types of content built for writing documentation (like Book) and port over all the help documents from the 2006 igem wiki.

Wednesday, February 07, 2007

iGEM2007.com drupal dev notes, feb 07

TODO:

I'll update this list as I learn.

OpenWetWare 2.0

TK pointed out in a conversation today that theoretically, what we really need for recording labwork digitally is something that combines the episodic, syndicated content of blogs with the collaborative and revision qualities of a wiki.

Much of the utility of such a system would come not just from having a record of your work online, but from the network effects engendered when that information was well-formed and tagged and shared in a quasi-standard way (the lower-case semantic web) with a community of similar users. The devil is in the details, and sharing information at such a fine level of granularity in a way that lets users pay attention to just the content most likely to be relevant (via tags and perhaps some predictive metrics) could help the community avoid making the same mistake more than once. It might even allow users to perceive and draw conclusions about phenomena that were too subtle or rare to be investigated previously.

Help develop iGEM's drupal-powered site!

Hello, are you experienced with building sites based on drupal and/or knowledgeable of PHP and not averse to learning about drupal? If so, join us!

The International Genetically Engineered Machines Competition is now entering its third year and is expecting an increase in participation from 34 undergraduate teams from around the world to about 80-100. The teams spend the summer learning & doing genetic engineering and then present their work at the beginning of November here in Boston at MIT.

Last year we required all the teams to document their projects on a mediawiki server (http://parts.mit.edu/igem) - iGEM is not just an event that happens in November, although that's when it is most visible; it runs throughout the whole summer, and we wanted to capture as much of that experience online as possible. We built another website to explain the competition and link into each team's wiki page from a world map (http://igem2006.com).

I am developing a dynamic, community-driven site for the iGEM 2007 competition. I've spent the last two weeks investigating CMSs and laying out what features we want the new site to have, and I'm fairly sure Drupal is the base from which we should start. However, it's clear that we are going to need to develop a couple custom modules to support unusual features of the site. For instance, we will have about 1000 users, each a member of a certain team. We don't really want user accounts for anyone else. So we need to develop permissions for users based on which team they are in, and streamline how participants are given accounts ( i.e. manually authorizing 80-100 team leaders, who then can authorize their own team members). We also want each team to have its own portal page, dynamically generated from posts to the team's collective blog, recent forum activity, recent activity on our genetic parts database, and from certain pages from that team's space on our mediawiki.

I am trying to learn drupal and php as fast as I can to implement what we want, but I don't think it will be fast enough. Would you be interested in helping, or know anyone who might? How much would consultation / module development cost? If nothing else, pointers in the right direction would be very helpful.

If you are interested, I encourage you to check out our introductory site (I didn't design it!) at http://igem2006.com, and perhaps my development blog at http://cis-action.com.

Tuesday, January 30, 2007

Microformats

As far as I understand it, besides using templates and categories, there is no way to really structure the content on a mediawiki page. For example, every team on the iGEM 2006 wiki provided a picture and a project abstract somewhere. The information was available, but not accessible without visiting every single team's page and actively looking for it. This year, one of our goals for the iGEM 2007 website & wiki is to make sure this kind of information is tagged, or marked up, or annotated, or put in a special area on a template, or otherwise standardized across all the teams. If information common to all teams is standardized, it will be much easier to find and reuse, from both a human and machine perspective.

I haven't learned much about it yet, but I'm excited about microformats (also see Alex Faaborg's blog). If you already know about them, please let me know what you think. Here's a popular definition from the microformats website: "simple conventions for embedding semantics in HTML to enable decentralized development." They are basically just standardized xhtml tags, and so should be easy to integrate with mediawiki content. The biggest hurdle would be making them simple for users to use.

Here's an example of the adr microformat:
32 Vassar st.
MIT 32-314
Cambridge, MA 02139
U.S.A.

N 42° 21'42.94
W 71° 05'28.36

It looks normal, but check out the source of the original post - the address has actually been marked up with the extra xhtml. Software agents, either in the browser (see operator) or scraping the page from elsewhere, should be able to understand the address.

The registry is one attempt at combining a database of user-submitted structured data and totally freeform wiki pages: special perl scripts provide a seamless interface between the registry database and what looks like normal wiki pages with forms on them. However, that solution does not seem as flexible or granular as the microformats; we need to find a way to make standardizing so easy everyone will do it most of the time. The microformats are good at letting users standardize a little bit of information on any wiki page. It would be hard to anticipate what or where that information would be in advance and then build forms.

I imagine special little buttons on the wiki wysiwyg editor (the one that appears when users edit a page) that format their information in the right way. A user could press the address button, which would drop a template of the xhtml right into their article, just like the link and media buttons do.

EDIT: I just realized that Operator doesn't support the adr microformat (as I understand it), so I'm adding our lat & lon in the geo format.

Monday, January 29, 2007

Diving into Drupal, or, List 'o' Modules

This post of links should help me dive deeply into drupal.

First, getting started with content in Drupal: Node Types.
Also see the docs on Drupal's Taxonomy system.
Also see the great IBM intro to developing a collaborative web site.

Most of the non-essential core modules listed here will be useful (comment, forum, node, profile, search, statistics, taxonomy, tracker, user).

Here's a list of interesting contributed modules I'd like to learn more about:
  • Simplenews: Simplenews is a simple newsletter module which allows both anonymous as well as registered users to subscribe to different newsletters.
  • Massmailer: manage mailing lists (based on PHPList)
  • Listhandler: synchronize mailing lists and forums
  • Organic Groups: Enable users to create collaborative groups (incompat. w/ taxonomy?)
  • Organic Groups List Manager: integrated mailing list/forum for OGs.
  • Node Vote: a node voting system
  • Interwiki: wiki syntax for linking
  • URL Filter: automatically turn URL text into hyperlinks
  • User Points: users gain points as they do certain actions
  • Tagadelic: weighted tags in a cloud
  • Privatemsg: an internal messaging system
  • Pathauto: automatic path aliases for nodes and categories
  • Biblio: manage lists of scholarly publications (including .pdf upload)
  • SPAM: tools to stop unwelcome posts
  • Services: standardized API for drupal
  • Google Analytics: free advanced web stats

Installing Drupal 5.0 on OS X 10.4

I started by archiving and emptying /Library/Webserver/Documents (I'm using OS X), then unpacked the drupal-5.0 archive there (without the containing drupal-5.0 directory).

I couldn't find the .htaccess file mentioned by Lullabot. I think it might have gotten lost as I moved the files around graphically using the Finder. It was present, however, when I used the commands suggested in the install instructions. Then I got the following error navigating to the install directory with firefox:
The Drupal installer requires write permissions to ./sites/default/settings.php during the installation process.

So my permissions are effed up. Boo. Just doing

chmod 777 ./sites/default/settings.php

seems to have corrected the problem.
The programs for connecting to and administering my mysql database haven't been added to my PATH, so I have to remember to go to /usr/local/mysql/bin for now to run them. I created a user for the drupal database as outlined in the INSTALL.mysql.txt.

Great. That seems to have worked. I changed the permissions on the settings.php (back) to 644. And Drupal 5.0 is online. Amazing.

Made a ./files directory and
sudo chown www:admin files
sudo chmod 755 files


Apparently I didn't install the GD library with PHP, so I'll have to recompile it.
Oh! Looks like I can use a precompiled binary put together by www.entropy.ch. I've got Apache 1.3.33. To successfully install the new PHP module (with GD support!), I've got to uncomment the lines that enable the pre-installed PHP module in the server config file, /etc/httpd/httpd.conf.

Goody! Now my system has PHP 5.2.0 with a bunch of libraries, including the GD library. However, the switch seems to have broken the connection between MySQL and PHP - Drupal can't get to the MySQL database. Ah, PHP is looking for the MySQL socket in /tmp/mysql.sock - exactly the opposite of the problem I was having when installing Vanilla forums a couple weeks ago. The solution was to change the php.ini file (now located at /usr/local/php5/lib/php.ini) to override the compiled-in default and look for the MySQL socket at /var/mysql/mysql.sock, which is "more secure," according to this apple developer document. It took me a while to realize the personal web sharing control panel buttons weren't actually causing httpd to restart and reload the php.ini file, nor was running apachectl graceful, for reasons I don't understand. Rebooting did the trick.
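
In case it helps, the whole fix boils down to one php.ini directive plus a restart (directive name quoted from memory for the mysql extension -- double-check it against your php.ini):

# in /usr/local/php5/lib/php.ini:
#   mysql.default_socket = /var/mysql/mysql.sock
sudo /usr/sbin/apachectl restart   # though, as above, only a full reboot made it stick for me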

The only task left is to set up the cron jobs. Then I'll have a completely generic drupal install.

Edit: got the cron working (with curl - wget and lynx are not pre-installed binaries on OS X). There was a little trick to get the clean URLs to work - the httpd.conf file has to be changed to allow .htaccess overrides. The Mac OS X specific guidelines explain it:
In httpd.conf (in /etc/httpd), locate the following section and allow overrides, so that Drupal's clean urls will work (they depend upon rewrite rules in .htaccess). You'll need to be root (or sudo) to do this. Don't forget to restart apache after modifying httpd.conf (turn personal web sharing off, then back on again, or use /usr/sbin/apachectl restart).
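
If memory serves, the section it's talking about is the <Directory "/Library/WebServer/Documents"> block: change AllowOverride None to AllowOverride All there so Drupal's .htaccess rewrite rules are actually honored.

For reference, the cron line I'd expect to end up with using curl looks something like this (the URL and schedule are just examples -- point it at wherever your drupal install actually lives):

# m h  dom mon dow   command
0 */6 * * * /usr/bin/curl -s http://localhost/cron.php > /dev/null 2>&1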

Friday, January 26, 2007

Content Abstraction in the Joomla! CMS

I've been evaluating Joomla and Drupal (and looked briefly at Plone) for the CMS of iGEM2007.com. Superficially, I've gotten some bad vibes about Drupal from the developer of a big site that is based on it, popsugar.com, and from attendees of MashupCamp3. Joomla seems newer and brighter and, in a way, more promising. But it doesn't seem to have documentation as robust or a user & development community as established as Drupal's, and it also doesn't seem to be as flexible in terms of extending it in ways the core developers hadn't expected. I feel pretty confident that we could use and extend Drupal to do what we want, but I'm not sure about Joomla, so I'm giving it one last hard look-over. I'd like to use it if we can.

I found the following buried in a visual tutorial in part of the Help section of Joomla.org. I don't know why it isn't prominently displayed on the first page of the developer docs - I think it should be.
One way of looking at Joomla is that a Joomla site really only consists of one page (plus a lot of content stored in a database). As you click on menu items, Joomla rebuilds the content of that one page, as if you are navigating to another page.

When you click on a menu item, that menu item is going to load a single main piece of content, such as an article or list of articles, or calendar or whatever else into wherever you've specified to be the main content area in index.php.

In addition, modules, that is, all of the smaller items such as menus, are going to either display or not depending on whether you've configured them to show for that menu item.

The idea of Joomla is to build a site not by creating pages, but by configuring menu items. A given menu item will load a particular main content item, plus whatever modules you want. The modules are always displayed in the location as set up by index.php. (However, you can make it appear as if modules move to different places on the page, however, by cloning modules, placing them in more than one location, and then hiding or showing them depending on the menu item.)
Great insight! But I'm still leaning towards drupal...

Friday, January 19, 2007

Promote / Digg anything

In the post about Team Blogs I mentioned a page that would show the latest updates to all the team blogs and some kind of "Digg It" system that would let community members promote a cool blog post to the Community News Feed on the main page. This promotion mechanism was a consistent theme in the brainstorming sessions, and the general consensus was that it would be really cool if, once a user was logged in, she or he was able to Digg just about anything - any thread in the forum, any blog post, any part in the registry, and any wiki page.

It would probably be useful to see a list of all recently Dugg items, and certainly it would be useful to see a "Most diggs of all-time" list. Furthermore, I think it would provide a real incentive to write interesting posts in the team blogs, particularly if getting a post dugg (just once? how many times?) would promote it to the igem2007 main page and into the Community News RSS feed.

But despite the simplicity of the Digg operation, it seems like this could be a pretty complicated thing to implement across the entire site. And how do we deal with team members digging each other's content?

Another way of thinking about the digg feature might be to contextualize it as a "Favorite this" operation. Users should click the favorite button next to any content that they want to show up in a dynamic list. I could see myself clicking this for help documents or discussion topics I like to visit often... but I don't think it would work for promoting blog posts. Hmm.

Team Blogs

I want the new site to provide a window into the experience of iGEM as it is happening, and I think the wikis last year did a poor job of capturing the "instantaneous narrative" of each team, although to be fair, that was never an articulated goal. Nonetheless, I am sure blogs would do a much better job than wiki pages. My intuition is that even if their usage won't be any more familiar to team members than that of a wiki, their purpose is much clearer. What I mean is that the very concept and structure of a blog suggests a certain kind of information that a wiki, even a page called "team blog", wouldn't, because of the associations with openness, flexibility, and revision that wikis usually have.

That said, even if we provided a little dialog box for team members on their team portal page to update their team blog, would anyone use it? It's hard to say. I hope that they would, and Meagan and I want to set a good example and have a registry blog running by the springtime. The purpose of the blog is for the teams to share their experience with the community, and we are going to provide a central aggregation page showing the latest posts for all of the teams, the ability to Digg or Promote the coolest posts to the Community News Feed on the front page, and RSS feeds for everything. Hopefully this will provide enough of an incentive for at least someone on each team to post to the blog every week or so.

Structuring Team & User Data: Team Portal pages

Prelude
James Brown, Brendan Hickey, Kim De Mora, Randy, Meagan, Tom, and I have talked a lot about what features we want the igem2007.com site to have. I have about 15 pages of scribbled notes and drawings exploring and defining these ideas with various degrees of clarity, but nothing that ties them all together. So I'm going to describe each of the main features we want separately, with the hope that doing so will make it easier to tie them all together afterward. And because this is a blog, I'm going to describe each one in a separate blog post, mostly so that it will be easier to comment on specific ideas, but also because hey, blogs are supposed to be bite-size.

Structuring Team & User Data: Team Portal pages

One of the main focuses of the new site is dynamic content. The main page is a place that everyone in the community should want to visit frequently because it provides an instantaneous snapshot of the state of the community. To that end the site must be built so that the content users and teams put online can be aggregated automatically into composite pages like the main page. Last year we required the teams to build at least one page on our wiki to represent themselves to the community, and encouraged them to put up much more content - but we left the organizational scheme up to them. The content and structure each team settled on was often similar, but never the same, and so a fair amount of effort was required to find the same information for different teams.

So, the new site will feature a Team Portal page for each team, dynamically generated from content teams upload via specific forms or perhaps even wiki pages (marked up using microformats so we could scrape the pages?). Here's a list of specific pieces of content we will want from each team:
  • Team name (short name, long name, official name)
  • Team picture (800px wide)
  • School logo (128px x 128px)
  • Project Abstract
  • list of links to main wiki pages
    • Elaboration of Project description, updates
    • Calendar
    • Protocols
    • Etc. ( other favorite links)
  • Team Blog
All of the data will be arranged in the same way on each portal page, ensuring a visitor to any team's portal page will be able to find the same information where they expect to find it.

I believe that part of the problem with the team wikis last year was that there was a general ambiguity about what was to be done with them. Teams knew they had to put something online, but there were no clear or specific guidelines as to what it was we wanted or how to do it. It's the structured data problem again. So in a lot of cases, those teams that did use the wikis ended up using them for two purposes: for project management, i.e. scheduling and posting the results of experiments, and for telling the team's story, i.e. introducing the members and giving the background and progress of their project. I think the wiki is an ok way for teams to get this content online, but we have to help them be more intentional about what information goes where.

Thursday, January 18, 2007

MashupCamp3 (at MIT) - post 2

Lots of links from MashupCamp3. I really should put them into del.icio.us, but my tags are so messy and unorganized I will just list them here until I have time to overhaul my bookmarks.
Be sure to check out Dapper & openkapow - they are awesome. The names alone are almost enough for this crappy non-metadata flat list, and I'll add tags and descriptions when I put them into del.icio.us, but just how much more useful will the contextualized collection be? What are the positive benefits, in all practicality, of using a site like del.icio.us? Network effects of the folksonomy?

Wednesday, January 17, 2007

MashupCamp3 (at MIT) - post 1

Well, I got back from BioSysBio 2007 and 2 great days of iGEM2007 work in Cambridge, UK yesterday afternoon, crashed for about 12 hours, and woke up for MashupCamp3, which happens to be at Hotel@MIT this year, today and tomorrow. I'm really hoping I get a chance to run some of the iGEM2007 ideas by the experience and expertise concentrated at this conference. Right this second, everyone in the room here at MashupCamp3 is determining the schedule of sessions for the day, because MashupCamp is an Unconference. It seems like many of the sessions are about mobile technologies and geospatial mashups.

BioSysBio

Well, BioSysBio 2007 has come and gone! What a blast! I was on the organizing committee and contributed mainly by videoing the talks and posting them on google video. About 2/3 of them are online now - just search for BioSysBio on google video. I'd like to post more about the conference and about some of the incredibly productive brainstorming sessions about the iGEM2007 website I had with James, Kim, & Brendan over the two days following the BioSysBio.... but, I'm at another conference right now, MashupCamp3 (@MIT)....

Saturday, January 06, 2007

Vanilla Forums... so sweet, so tasty

Vanilla is a forum engine that is simply beautiful. It is not clunky. It seems straightforward. It seems very extensible. It rocks. And I am hoping it will make one sweet foundation for our community forums at the igem2.0 website.

Anyway, after reading a bunch of the information at onLamp.com, I downloaded and installed MySQL 5.0.27 and enabled the PHP module in apache. There was some confusion in all the documentation I was following as to how to set the mysql socket to the right location... apparently the binary I installed from MySQL.com (for Mac OS X 10.4 (x86)) sets the socket to /tmp/mysql.sock, but PHP expects it to be at /var/mysql/mysql.sock. Two Apple docs here and here indicate that /var/mysql/mysql.sock is better for security purposes, so I attempted to change the MySQL defaults to move the socket by making a MySQL configuration file at /etc/my.cnf that contained
[mysqld]
socket=/var/mysql/mysql.sock

[client]
socket=/var/mysql/mysql.sock
Unfortunately, this didn't quite fix things immediately, but with some very non-scientific, uncontrolled fiddling and reconfiguring, I got the server online. So now we have a pretty cool forum. Oh, also, for the record, I tried hardening the default install of MySQL by removing anonymous access and defining real, encrypted passwords for the remaining accounts... but I have no idea what gaping holes I'm leaving open.
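
For what it's worth, MySQL ships a mysql_secure_installation script that walks through roughly that same hardening (root password, anonymous users, test database) interactively -- assuming the 5.0 OS X package includes it, it should live alongside the other client programs:

/usr/local/mysql/bin/mysql_secure_installation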

Next up, extensions for Vanilla and choosing a blog content management system.

Friday, January 05, 2007

parts.mit.edu/igem 2.0

Over the next month or so I'll be redesigning and rebuilding the iGEM website. My design goals are to make it prettier, simpler, easier to use, much more dynamic, and above all, overflowing with features and functionality that naturally generate a much stronger sense of community than what we currently have.

So far Randy and Meagan and I have done some brainstorming and I've started doing some detailed concept sketches of the features we talked about. After I finish with those, I'll make some even more detailed mock-ups in photoshop. I'll explain and show a bunch of these features in another post. This is just to get my foot through the blogging door again.

While the main site is in development, we want to have a discussion board and mailing list online at a skeletal interim site about iGEM2007. I am really excited about the Vanilla discussion forum. It looks... awesome. Today my goal is to get a test version of it running on my machine here, blamo. I found some nice tutorials on getting into the superficial layers of apache in OS X at O'Reilly's onlamp.com.