Tim Burke: Director, Emerging Technologies. People all over Red Hat work on Fedora.

Community Development Principles: Fedora Community; Productize!

* Collaborative development: the power of the whole compared to the sum of the parts. Even with many times more employees, Red Hat's ability to add new features on its own would be a fraction of what the community achieves. Red Hat does stuff for itself as well as for everyone.
* Community development is as important with testers as with developers. The community benefit of testing brings lots of different combinations of hardware that you can't have in house.
* Red Hat offers a gathering point to bring people together for a dialogue in a common spot. Through that awareness, you reduce duplication of effort.

Project Relevance, Hospitality:

* Different types of community projects are successful.
* The most important part is to provide something useful. Building Rube Goldberg machines isn't usually useful. With no value, no one will help you.
* There's a broad base of open source software, so lots of options. Also looking for completeness: "we want to open source the base, keep the rest" means you need to make sure the open product is standalone useful.

How do you make an open source project successful?

* Publicize: community forums, SourceForge. Web communities can enable how-tos (use, get involved, test, develop, etc.). Many projects don't have a good web presence (see sdlroads).
* Stimulate people to understand where they can contribute.
* Well-organized code.
* Good packaging for sharing (not only SVN!). Pre-built binaries!! Not everybody is a developer, but some people could be testers.
* Mailing lists are the primary medium. IRC is synchronous and good for some things, like quick one-liners, but not for major design decisions; mailing lists work for in-depth issues.
* A wiki may fill in for web pages: a fabulous way of giving links, project status, etc.
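The "good packaging, pre-built binaries" point above might look like this in practice. A hypothetical sketch using RPM tooling; the project name and paths are made up, and this assumes rpmbuild and createrepo are installed:

```shell
# Sketch: turn a source tree into shareable artifacts rather than pointing
# users at the raw repository.  "sdlroads" and all paths are hypothetical.
tar czf sdlroads-1.0.tar.gz sdlroads-1.0/   # versioned source tarball
rpmbuild -ta sdlroads-1.0.tar.gz            # build source + binary RPMs
                                            # (tarball must carry a .spec file)
createrepo /srv/repo/fedora/                # publish a yum repository testers
                                            # can point their updater at
```

The point of the last step is exactly the one in the notes: testers who will never build from SVN can still install and report bugs.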
* Status page: current release, status, TODO list.
* Receptiveness.
* RERO (Release Early, Release Often): improves developer/user interaction; wider testing, feedback sooner; stuff happens faster; people see progress and momentum.

Open source is not just "throw the code out there and let people grab it." Share your views as you go along; don't hold the code back.

In the kernel community: have seen projects getting LVM into the kernel. Two major companies, two approaches. One built from the ground up by involving people: "here are some preliminary design ideas." The other: "we've got our own LVM that we could port to Linux. We'll stay out of the discussions; we're going to work on ours." They worked for a year, then: "here's 200,000 lines of code, swallow that." It didn't go over too well; who can review that? That's just a simple example of why involving people can help. Don't work in isolation. Put a disclaimer on it: "here's some preliminary code, everything sucks," but throw it out there anyway. In contrast, the closed LVM was a monster pile of code.

Netscape Navigator -> Mozilla: one succeeded, one failed. People at Netscape recognized that forming a community was important. They established a nonprofit to serve as an independent entity to foster development: web pages, a development list, IRC channels, early draft versions. Kick the tires; here are some incomplete areas. They wanted people to help out. The bigger the project, the more important it is to roll out the welcome mat.

How do you coordinate so that you end up with something that works?

* Linus is a great example: leader of the kernel. The kernel is just one small thing, the lowest level. "There's more to the system than just the kernel."
* "Benevolent dictator": every successful project needs organization.
* There's an integration pool of the kernel, where people put the latest changes; then they go into the bleeding-edge pool, then some testing, automated nightly builds, and people are allowed to access it. Keep that separate from the more stable pool.
As a user, you can go onto the site and decide: do I want to test bleeding edge, or stable? Bleeding edge may not boot 50% of the time; it's really rough. The other option is "here's the code, once a month, once you're sure it's more stable."

Release development milestones: our next release is in 4 months, with a beta every 6 weeks. All changes have to be in by the first base level; the final build is only critical bug fixes.

Another thing: don't just let anyone make changes. Contributors have to build up credibility. Start by being a tester, then tests plus fixes; eventually "Heather is pretty good, why don't we give her commit permissions." All of these are ways of imposing structure, schedule, and guidelines: reviews, coding conventions, unit tests, etc.

"Upstream is they": there are multiple theys. Red Hat is a Linux distro: 1200 separate packages, a huge number of parts: kernel, compilers, libraries, installer, OpenOffice.org, Thunderbird, web browsers, the GIMP, games, etc. Multiple communities develop each package: the kernel file systems community, kernel SCSI, the GCC compiler people, the groups of people who do debuggers. GCC is a "they"; the GNOME community, etc. Red Hat collects all those thousands of projects, picks the best, and puts them into a product set, and has some of the most active developers in a large number of these projects.

* "They" are the lieutenants, the people who have earned trust. Most development is open; the stuff that goes on behind closed doors is tech oriented. Frustrating to outsiders, but the right thing usually gets done that way. Community development often takes a long time to get it right.

Are open source developers selfish, or working for the community? Some of both: devils and saints. For the majority it's not business; they want to find a group of peers. Nothing is cooler than knowing you have code in a distro. People love the schwag: getting their name in the contributors file. Among the developers, they're there for the love of the open source community. It's not a job, it's a way of life.
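The two-pool model described above (a bleeding-edge integration pool kept separate from a stable pool, with changes promoted only after testing) can be sketched with branches. A minimal illustration using git as a stand-in for whatever tooling a project actually hosts; all names here are made up:

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email tester@example.com   # hypothetical identity
git config user.name Tester

git commit -q --allow-empty -m "base release"
git branch stable                 # the stable pool: for users who want "boots"
git checkout -q -b integration    # the bleeding-edge pool: nightly, rough

echo "experimental" > feature.txt
git add feature.txt
git commit -q -m "experimental change"

# Only after a change survives testing is it promoted to the stable pool:
git checkout -q stable
git cherry-pick integration
git log --oneline stable
```

The separation is the point: testers opt into `integration` knowing it may break, while `stable` only ever receives vetted changes.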
Why is it important to participate upstream?

* Proprietary is not an advantage.
* Get patches into upstream.
* Everyone benefits from sharing and collaboration.
* When Red Hat develops new features, we don't develop in isolation. Wider testing is good for everyone, Red Hat too.
* If you draw from the open source community, you want to be able to incorporate upstream fixes. Merging upstream fixes is a PITA unless you're close; the closer you are, the easier it is. (Delta from upstream, in lines: RH9: 600,000; FC1: 560,000; FC2: 90,000; FC3: 106,000; FC4: 104,000, not counting Xen.)

The kernel is not so much a spare-time project anymore: Red Hat, IBM, SUSE, Conectiva, and Linus make up 30% of kernel commits.

A neat benefit of open source development: job applicant and employer. You can look at the code, view the interactions, and get a general idea of how the code and the company work without having to be there. Open source lets you see a company, and participate in it, before you work there. It works both ways: you can evaluate Red Hat's source, or Red Hat can evaluate a potential employee's source. How people interact is just as important in software development. "I wonder if in the college curriculum there's a social side of collaborative development." You still have to play well with others in closed source, but doubly so in open source.

Fedora community: Infinity + Freedom + Voice. Fedora is a community distro: a collection of 1500 parts that let you press out CDs/DVDs. Fedora was built up as a community effort, structured so people can get involved, guided by the Fedora Foundation. It avoids software with license/legal problems: RHEL includes Adobe's viewer on a "dirty" CD; that doesn't exist in Fedora. Centralized collaborative development; RERO. Sponsored by Red Hat, autonomous from Red Hat.

Fedora:

* Built for developers; RH engineers use it to develop.
* A community project, not a product.
* Thousands of testers.
* Hundreds of patch submitters.
* Roughly 70-100 Extras packagers.
* A dozen Legacy contributors.
Core distro, plus: Directory Server, open source Java, Documentation, Ambassadors, GFS, Extras, Legacy, upstream development, Translation, Triage, Xen.

Fedora Legacy:

* Generally for servers; developed by OEMs, consultants.
* fedoralegacy.org
* Covers RH 7.3, RH 9, FC1-2-3; some people want the old versions.
* Security fixes only.

Fedora Extras: a centralized repository, 100+ contributors, with 95% of packages maintained by volunteers. Quality standards; a default repo in FC4 and FC5. Too much crap would make it unusable, hence the standards.

(Tim says maddog shamed him into presenting.)

RHEL: a snapshot of FC, plus high-end stuff: gigantic disk support, 512 GB memory systems, 64-CPU systems. Work with partners for large-system scalability. There are too many requests for people to solve everything: look at thousands of requests, find common themes, prioritize. How does FC look? It may be too bleeding edge to productize. Pick your own direction, but stay a step ahead: don't just respond to today, respond for tomorrow. Oracle might hack the VM to make the best database server, but that might be bad for interactive performance. Some changes may not go into RHEL unless they're in upstream; upstream viability is extremely important. Pull in Red Hat, Fedora, partners, upstream, on-site partners, beta testers. A beautiful thing: in Westford there's a third-floor section populated by partners, people who aren't RH employees but IBM/HP employees sent to Westford to work with them. They can test on their hardware first. RH is not a parasite: lots of work goes directly upstream, and there are lots of ways for people to push stuff in.

Success factors: avoid feature creep; partner involvement; input to requirements; betas.

The Fedora community is better at:

* Attention to detail: many eyes, a wider range of hardware, real-world situations.
* Fun things: "love artwork for the desktop," games, "lots of new and exciting tech that people dive into."
The company is better at:

* Long-term focus: "Xen: lots of people doing a full-time effort; you can't do it effectively on an intermittent basis."
* Boring things: "better documentation," "translation," "more rigour that error paths are tested," "long-term robustness." GCC 4.0, the O(1) scheduler, FORTIFY_SOURCE, Exec Shield.
* Expensive things: large hardware. "Far more inclusive for scalability issues."

Interest in .NET: talk about .NET on Linux.

* The technology is called Mono, developed by GNOME people. First of all, Red Hat wasn't thrilled about Mono, because it's fundamentally not an "open" community, and didn't want to endorse it; up until recently there was no Mono in Fedora. Another reason: the legal black cloud developed by Microsoft, etc.
* The lawyers have spent time and recently concluded that Mono is safer to include, just in time for FC5 test 2, which got Mono in. The main reasons are not that we love Mono for Mono's sake, but that more GNOME-based apps take advantage of it; lots of users are clamoring for GNOME tools that require Mono.

FC3: zero configuration, thin clients, etc.; cluster file systems. No clear development path.

* Red Hat open sourced GFS. Sistina developed a cluster file system: multiple computers accessing the same files. They were trying to do it as a proprietary business. RH spent millions of dollars buying out Sistina; it could have stayed paid-only, but Red Hat re-open-sourced all of GFS and is helping it grow.

NFS: a huge foundation component of the enterprise, with a big role in virtualization. AFS is the Andrew File System; not an area of focus for RH.

Suppose you want to use a computer in an NFS environment but keep the latest versions of files on the local machine, so that you don't have to go over the wire all the time. CacheFS caches locally the files that you've gotten recently. CacheFS is something the main upstream developer is trying to push into the kernel. It's still basically in the design phase: implementation goes into the integration pool, design issues come up, and it goes back to the drawing board.
CacheFS may or may not play a prominent role going forward. There are lots of challenges: all caches are finite in size, and incorrect caching algorithms can slow things down. Ultimately CacheFS will only pertain to read-only data. Dunno if it's in the next FC.
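For a sense of what this looks like from the admin side, here is a hedged sketch using FS-Cache and the cachefilesd daemon, the line of work this CacheFS effort fed into in later kernels. The server name and export path are hypothetical, and this requires root and a cachefilesd package, so it is illustration only:

```shell
# Start the local cache backend (backing store lives on local disk,
# typically under /var/cache/fscache):
sudo systemctl start cachefilesd

# Mount an NFS export with the 'fsc' option so recently read files are
# cached locally instead of being fetched over the wire every time:
sudo mount -t nfs -o fsc fileserver:/export/home /mnt/home
```

This matches the constraint in the notes: the win is on repeated reads of mostly read-only data, and a badly sized or badly managed cache can make things slower rather than faster.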