Simon’s session Power Shell Usage: Bash Tips & Tricks was simply wonderful, full of goodies on how to become more productive when working with the shell. Read the corresponding paper to go into the details.
UKUUG: Papers Online
Most of the papers of the UKUUG conference are now available online. Access them via the programme, for example: click on a title and find the paper linked on the abstract page.
UKUUG: KDE Development
David’s talk KDE Development dealt with new features in KDE 3.2.
He introduced the audience to application scripting via DCOP, which allows for inter-process communication between KDE applications. KDCop is a useful GUI to inspect running applications and processes. DCOP has bindings to many languages (also PHP?) and usually works on the local machine only; it can be enabled to work over the network, but this feature is disabled by default.
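Besides the KDCop GUI, KDE 3 also ships a dcop command-line client. The following sketch is hypothetical (the konqueror application name is just an example of a running app) and degrades gracefully when no KDE session is around:

```shell
#!/bin/sh
# Hedged sketch: inspect running KDE applications via the dcop
# command-line client (KDE 3). Falls back gracefully without KDE.
inspect_dcop() {
    if ! command -v dcop >/dev/null 2>&1; then
        echo "dcop not available - this needs a KDE 3 session"
        return 0
    fi
    dcop             # list all registered DCOP applications
    dcop konqueror   # list the objects a running Konqueror exposes
}

inspect_dcop
```

Running dcop with no arguments lists the registered applications; giving an application name drills down into the objects and functions it exposes.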
KTrader handles “service types” and associates applications with mimetypes, taking into account the user profile. I thought to myself that we should think of a trader within CONESYS that deals with DDO packages and associates those packages with a certain repository.
KParts is there to load application components in the manner of KParts::ComponentFactory::createPartInstanceFromQuery[…], which is a new approach compared to older KDE versions and much shorter.
Something you can easily try out yourself is KDialog which displays a KDE dialog similar to a JavaScript alert(). A sample:
>kdialog --yesno "Run this script?" || exit 1
With KDialog, e.g. shell scripts can ask for user input.
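A minimal sketch of that idea, assuming a KDE 3 session with kdialog installed (the prompt text and default value are made up); it falls back to a plain default when no display or kdialog is available:

```shell
#!/bin/sh
# Hedged sketch: ask the user for a value via KDialog when available,
# otherwise fall back to a hard-coded default. Prompt text is made up.
ask_name() {
    if command -v kdialog >/dev/null 2>&1 && [ -n "$DISPLAY" ]; then
        kdialog --inputbox "What is your name?" "world"
    else
        # no KDE around: just use the default value
        echo "world"
    fi
}

NAME=$(ask_name)
echo "Hello, $NAME"
```

kdialog prints the entered value on stdout, so the script can capture it with command substitution just like any other shell tool.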
KOffice is currently in a phase of transition to an XML-based native format as defined by the OASIS office committee. This process, which is still at its beginning, will provide simple interoperability between OpenOffice and KOffice. I noticed that KOffice has a flow chart application called Kivio which I will try out at home 🙂
Some notes on Konqueror: it will allow for sidebar modules à la Mozilla and inline spell-checking. Also, it will be possible to integrate applications like Umbrello to display UML diagrams.
The next KDE meeting will be the first one open to the public. Up to now, the meetings were only meant for KDE developers. From my point of view, this is a very important and right step ahead to also include the users when developing a FOSS desktop system. The often perceived dichotomy between developers and users must be bridged, as has also been discussed at this year’s OSCOM conference concerning content management systems.
UKUUG: Free and Open Source Software in the Health Service
I was interested in learning about an area of software engineering that I have not dealt with yet, hence I attended Anand Ramkissoon’s talk Free and Open Source Software in the Health Service.
Anand outlined the history of medical lab software:
In the 1980s, various in-house systems were built, which were
– well specified
– unique to specialism
– unique to individual lab
– constrained by hardware
– not portable
– driven by enthusiasm with no budget
Then, in the 1990s, commercial systems became available that descended from the in-house systems. Their characteristics:
– mainly multi-specialism
– mainframe-based
– written to a simplistic specification, awkwardly extended
– quality could be higher
Since 2000 onwards, we have seen the death of in-house systems; Y2K compliance killed off the last of them. Also, a retreat by the medical labs from strategic involvement in specification can be observed, which is in Anand’s opinion “a huge mistake”. Another characteristic of the current situation is commercial lock-in, mainly due to proprietary data formats. In this regard, the medical labs lack power in negotiations, because the main companies simply refuse to port old data, although it is a requirement of the labs. And the vendors refuse even when there are no technical constraints. Hence, the buyers simply have to take the software on faith, because they have no influence on the development process.
Anand’s alarming general statement is that “the quality of software currently used in UK labs is poor, absolutely poor”. The reason is that there is virtually no competition, with 3 different software systems in the UK and globally not many more. Why are there not more vendors, he asked himself? And answered: there is no balance of power in the market, and the software is hard to specify as it requires detailed domain knowledge.
The reality is that people at health services invest an awful lot of time cleaning up after the software. This situation is paired with the counter-productive philosophy in higher and middle management of health services that investing in new technologies is only possible when employees are laid off.
In summary, the health services in the UK, maybe even world-wide, and especially the medical labs are currently in the phase of vendor lock-in (sounds familiar to me when thinking of the general history of FOSS). Consequently, Anand started the project “Ganesh” to find a common standard for data interoperability, as well as developing an Open Source reference implementation.
The aims of the project “Ganesh”:
– portability of databases, extracts and records
– global specimen identifiers
  – not an obvious idea to medical houses
– vertical processes
  – specimen-centred: log, aliquot, test, refer, report, validate, comment, authorise, store, discard
– horizontal processes
  – “back room”, QA, QC, workload measurement, global test QA
– modular and extensible
Anand, good luck!
UKUUG: Not Fired for Buying Linux? Quirks of Open Source Adopters' worldviews
My notes and some comments on Andrew Nicolson’s presentation Not Fired for Buying Linux? Quirks of Open Source Adopters’ worldviews
Throughout his session, Andrew did not work with slides displayed to the audience. His talk was mainly a kaleidoscope of good ideas and criticism.
He started with some questions addressing the audience, the funniest one: “Who wears a dress at his working place?”
Andrew continued analysing who is actually looking at the adoption of FOSS. It’s mainly the media, he said, that presents case studies, articles, interviews, etc.; but, you usually don’t hear about people who did not go for it.
Moving on to discuss the term “computer users”, he doubted its usefulness. He brought up an analogy: “Although managers talk a lot, we don’t call them talkers; although politicians shake a lot of hands, we don’t call them shakers – but computer users are called computer users because they use the computer.”
Decisions on migrating to Linux, Andrew said, sustain the myth of rational decision making, which is a masculine approach to decision making. In general, Andrew is a constructivist thinker when he says that a decision is made first; afterwards we piece the evidence together to make our decision defensible.
Software is created socially – in discourses, speech and text – and works with concepts, networks of concepts, and theories.
His MBA research is based on some migration examples:
– the city of Nottingham, which moved to a SuSE email system
– a school that moved to OpenOffice
– Unilever, which decided upon a 5-year plan for migrating to Linux
– the West Yorkshire Police
The result is that he detected the classical structures of fairy tales and narratives in those migration stories: there is 1. a problem/crisis, 2. a hero, 3. a solution (Linux, the “magic tool”). Andrew explained that the story makes the teller look good and is deeply rooted in a traditional conceptual framework/structure. It all comes down to the phrase “we lived happily ever after”.
Having a closer look at the actors in the migration narratives, he enjoyed interviews with employees of the West Yorkshire Police, who offered “TV clichés” like “tax payers’ money” and “I am a responsible policeman”.
One more interesting point when looking at the migration stories is that, on the one hand, FOSS is presented as something new and different, while on the other hand its similarity is stressed (e.g. between MS Word and OpenOffice).
Andrew advised the audience to consider that it might not be a good argument that Linux helps companies save money, because the power of a manager is bound to the budget of his department: the more money he spends (or can spend), the more power he has.
At the end of Andrew’s superb talk, I asked myself: what is the essence of his statements? Is there nothing new under the sun, even with FOSS, or does it essentially make a difference?
UKUUG: Linux@IBM
Richard J Moore from the IBM Linux Technology Centre talked about Linux@IBM:
Richard started off with a survey that asked “Based on what you have seen or heard so far with Linux, how would you rate Linux on the following aspects?”. The results (most important on top):
1. Reliability
2. Acquisition Costs
3. Performance
4. Value of Open Source
5. Security
He labeled Linux Kernel 2.6 “a major step in the maturity of Linux”.
Summarizing IBM’s strategy, these are the important points:
– enabling Linux hardware, software and services
– partnering with established Linux vendors
– participating in the Linux FOSS developer community
– promoting adoption of open standards
Concerning workload consolidation, IBM sees the following value propositions for Linux:
– reduce cost
– use resources more efficiently
– improve performance
– speed deployment
– centralized administration
– dramatically improve TCO
Yes, Linux is also being used inside IBM to “eliminate OS/2 and Windows servers” (!). Linux runs on about 1100+ xSeries servers and on zSeries in the IBM intranet, handling email filtering, web serving, etc.
The Linux Technology Center department, where Richard works as RAS architect, focuses on the following areas:
– kernel scalability
– POSIX threading
– PCI hot plug
– etc.
Their work is not architecture-specific. The department employs about 250+ engineers. More information can be found on the LTC Website.
Richard stressed at the end of his talk: “IBM does the utmost to be a good community player”. The succeeding panel discussion largely covered patent issues. Giving up IBM software patents would involve a “painfully expensive process”, Richard said. Jon “Maddog” Hall asked him why IBM does not simply issue a statement saying that they will not use their patents against any FOSS project. Of course, Richard answered that this is nothing he could decide and added that “IBM is a big company that takes long to change its culture”.
Another question from the audience addressed the point why IBM does not offer Linux support for PCs or notebooks. Jon helped Richard answer the question, saying that the IBM QA team would have lots of work to make sure that Linux runs on their hardware – even if they chose only some PCs or notebooks. Nevertheless, Jon predicted that “the more Linux goes to the desktop, the more it will be supported, it’s simply a business decision”. Richard followed Jon, saying that more significant investment is needed to make Linux widely adoptable for the desktop, but currently it is still too hard to get a proper return on investment.
Thesis 4: Make Content Networking a Commodity!
Inspired by Jon “Maddog” Hall’s statement at this year’s UKUUG conference that the network is built into Linux, I asked myself: why is the network not built into any open source CMS as it is with Linux? Why is it so hard to connect them? Why are they still monolithic blocks of content management?
Obviously, developers of OSS CMSs have not yet learned the lesson that Linux teaches them: make networking a commodity! Yes, of course, we all do Web services now – SOAP or XML-RPC. Yes, there are RSS feeds, trackbacks, pingbacks. Good! A good start, especially in the weblog community. Unfortunately, the quest for interoperability is likewise just at its beginning. Can we learn from the times when networking was being built into Linux? Maybe it’s worth taking a look back at the discussions that evolved in the *nix community.
I will keep an eye on that in the realm of the CONESYS project.
UKUUG: Extreme Linux Programming – A Continuum
Some notes on Jon “Maddog” Hall’s session Extreme Linux Programming – A Continuum.
It was new to me when Jon told us that the Titanic movie was rendered using 160 Alpha processors running Linux. The final rendering took about a year. The producers saved $500,000 compared to proprietary solutions, a circumstance that Jon commented on with: “So the world’s most expensive movie was half a million dollars cheaper”.
Jon exemplified how Linux is used for supercomputing: finding quarks (physics), adaptive control of earthquakes, simulating meteorites crashing into New York, and analysing mammograms (breast cancer screening).
In these cases, Jon said, Linux helps with its cost efficiency, because people often say “We know how to solve the problem, but we cannot afford to solve it” – until they see the cost benefits of using Linux for supercomputing.
Jon drew the following chronological line of past and future, showing “Where does Linux belong?”:
– Beowulfs 1994/1995
– Small-Mid Range Servers 1998
– Embedded Systems 2000
– Commodity based NUMA machines 2003
– Desktop 2003/2004
And the nice thing is, he added, that all of it is based on one set of APIs.
Linux is just perfect for supercomputing, he said, because “the networking is built in” and “parallelism screams at you”. With Linux, you have parallelism even on single-CPU machines, where it cuts down on I/O wait time and keeps memory and cache “warmer”.
The investment protection that Linux offers to supercomputing implementations is based on the:
– standard operating system
– standard architectures
– standard programming techniques
inherent to Linux.
Oh, and I learned a new acronym: RAS = Reliability/Availability/Scalability.
UKUUG: I'm there
Just arrived at the UKUUG Linux 2003 conference in Edinburgh. I intend to blog some of the sessions I will attend. The first one will be Jon’s talk. So keep coming back to my blog because I will add some reports from time to time, until the conference closes on Sunday.
Upon registration, I of course received the famous conference bag with many sponsor ads. To my surprise, I also found a printed copy of a Samba 3.0 How-To. Good idea, thanks!
Fortunately, I had some time to do some sight-seeing in Edinburgh yesterday, which is – of course – a great city, as I have now been able to experience for myself.
As far as I have seen, the UKUUG conference Web site does not provide trackback links for each session – maybe next time for the winter conference? Then bloggers could reference single sessions. Apropos conferences and trackbacks: O’Reilly’s OSCON Web site provided trackback links, so maybe it will soon become commonplace for any conference Web site – which would really make sense.
The Visibility of Opinion Changes
During the past two weeks, a heated debate evolved in the blogger community: Mark Pilgrim set up the “Winer Watcher”, a site that mirrors Dave Winer’s blog entries every 5 minutes using Dave’s RSS feed, diffs the feeds to indicate changes Dave makes to his postings over time, and shows the changes in chronological order. Dave Winer urged Mark Pilgrim to stop his service, and Mark has (for now?) shut the public out of the Winer Watcher with password protection. Read more about this debate and some legal implications in the comment by Karl-Friedrich Lenz.
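The underlying mechanism is simple enough to sketch in a few lines of shell; the file names and content below are illustrative, not Mark’s actual code:

```shell
#!/bin/sh
# Hedged sketch of the Winer-Watcher mechanism: keep snapshots of an
# RSS feed and diff them to surface silent edits. Everything here is
# illustrative; the real service fetched Dave's feed every 5 minutes.
diff_snapshots() {
    # diff -u prints nothing and exits 0 when the snapshots are identical
    diff -u "$1" "$2"
}

# Demo with two tiny fake "snapshots" of a feed item:
printf '<item>original post</item>\n' > snap1.xml
printf '<item>silently edited post</item>\n' > snap2.xml
diff_snapshots snap1.xml snap2.xml || true   # shows the edit as a diff
```

Run from cron against a freshly downloaded copy of the feed, this is essentially all it takes to make every retroactive edit visible.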
Now, what’s my opinion? I am very much in favour of transparency, in private as well as in public. Transparency is one precondition for mutual trust. If you change your opinion, make it transparent so that everyone can understand what you are doing and why.
Dave Winer argued that by setting up the Winer Watcher, Mark Pilgrim takes away Dave’s right to change his mind. OK, an application solely meant to track changes in one individual’s thoughts must feel awful for the person affected. But Dave is a “prominent” blogger who – from Mark’s viewpoint – seems to behave badly as far as transparency is concerned. The conflict between the two emerged because Dave did not cease to change his blog entries and to deny things he had written before. At least, this is what the pro-Mark faction thinks.
I personally do not understand why Dave acts so aggressively. Writing in public, especially in a medium like the Internet, means that many people can aggregate and cache your texts – even before RSS feeds emerged. Searching Google for my name will bring up many Web sites that are more than 7 years old, and they show some parts of my personality. And yes, I have changed in those 7 years, and my opinions as well. Clever minds will understand when comparing those old Web sites in chronological order. So what?
People who publish printed articles and books cannot later take them out of every library in the world just because they published a new version of their document in which their opinions changed. Writers like the philosopher Wittgenstein completely changed their opinions during their lives – and that is exactly what is so interesting about his thinking.
Webloggers publish tiny articles every day, and up to now they could change the content without advertising the changes in a diff, as Wikis do for example. I am in favour of the visibility of opinion changes, and it does not matter whether they are processed with computational means like a diff between RSS feeds, or in a scholar’s mind comparing Wittgenstein’s two contrary books. The only thing everyone has to decide on their own is: how much of my thinking do I want to be public and how much do I want to keep private? In any case, you should be transparent in what you are doing, otherwise it is unlikely that people will trust you.
