2010-08-16

Life after life of Google Wave

It is a little sad, but Google is ending development of the Wave product because "Wave has not seen the user adoption we would have liked".

I liked the whole "Wave" idea presented at the Google I/O conference. It was a new and powerful tool, covering the features of multiple existing services: instant messengers, chats, forums with threads and more. On the other hand, people were scared off by the initial learning curve, and even by having to understand at the start what the service was for. Compare another successful service, Twitter, which wins with the simple question "what's happening now" and a box for a short text message. It seems that simplicity of use and ascetic features won in that case.

Wave has unique features like live collaborative editing, which I like the most. It is a perfect tool for quickly writing documents as a collaborating team. Google Wave is an open source project, so I hope such features will be incorporated into online editing suites in a way that makes collaboration almost effortless. They should also find their place on some forum-based sites.

If you miss the service, look for Google Wave offspring or build your own - the code is publicly available.

2010-07-01

Software development platform wars as a movie theme

The summer break makes me think about the funny side of life.

Check out the parody "Java 4 Ever" movie trailer, set in a reality of dominance and war between development platforms.



Relax and consider that a development platform is only one of many choices serving a bigger purpose.

2010-06-15

Funny work story - Slippery slope

Although the story has a software engineering context, the situation described there seems common to almost all kinds of workplaces.
By the way, I like the style of the www.dadhacker.com posts.

2010-05-31

Requirements management story

The development of a simple, specialized project management system was set in motion. There were a couple of meetings and email "conversations" about requirements and use case scenarios. There was one administrator-like system role, valid for the period when a system "project" was running. I asked the client a few times about replacing the user in that role in case of illness, days off, etc.

It is not important now .....

The answers were: it is not an important part of the story; we can wait to finish the project business if there is enough time, or substitute someone using the same system user; and finally, there is only one administrative user per "system project". The message was clear: it is not important, keep going, we can handle the situation. The application was simple, so it looked like cutting corners on unused features.

... but I cannot accept that ...

The story lasted until the final acceptance tests. One of the users on the client side playing such a system "administrative role" got a few days off. Another administrative user then asked for guidance, because "there was something in the manual about replacing users of another role in system projects". After reviewing the quick "user doable" solutions, he had serious objections about the completeness of the system. So either a detailed walk-through procedure will be enough, or a new feature will have to be added to the application as part of the service agreement.

The role of final user adoption

It's a story about a little detail, but it shows a common scenario that very often takes place during software development.
During requirements gathering the project team planned for common usage and a sanely limited set of functionalities.
Every user thinks in a different way. Some will understand the provided functionality set; others will need detailed procedures for every specific situation.
Final user adoption is an important thing. That is one of the reasons why so many classic waterfall projects fail. There are also stories about complete systems, agreed on paper but unused or misused by users who just didn't like them.

2010-05-26

Spring cleaning time - making room for new activities

Spring is a very busy time for me. I'm not talking about casual house cleaning (I tend to do that quickly), but mainly about my work context. It's time to finish started projects and make room for new ones and for incoming summer activities. The weather in May is really nice where I live, so I devote some time to biking and long walks. I really didn't want to start other tasks, like writing blog posts about something abstract enough (I still have "blog vs privacy" issues). So please forgive me for the long break without posts.

I'm also thinking about the blog's character - it's now a mixed personal/technical blog. I like reading entertaining personal blogs, but that is not the point. It seems people prefer more specialized, thematic blogs, so maybe I should go that way. I don't want to fight for readership by every possible means, but more viewers mean more potentially interesting contacts for me.

Yes, I'm talking about you, dear reader. So if you are interested in my subjects, or have a project/business idea, write to me at itprolife@gmail.com.

I'm inviting commenters too.

2010-03-31

Good practices for backups on DVD

DVDs are good, cheap media for medium-term backups of old files, or for periodic snapshots that fit into 4.37 GiB of space. My best example is storing old photos.

Choose your disc

Choose good quality discs from a known brand/manufacturer, relying on reliability statistics.

Choose DVD+R over DVD-R: +R uses a better writing method and has better error correction; check this article for a longer explanation. Do not use rewritable discs - their erasable data layer can be damaged more easily.

Burning speed: the lower, the better. Higher speed means potentially more errors to correct during writing and reading. The most hardcore safe backup setting would be 1x, but 4x or 8x should be quite safe when today's DVD writers offer 16x writing speed.

Handling

DVDs are fragile and especially vulnerable to scratches, so discs should be handled with care, without touching the recorded surface. The disc surface should be kept clean to avoid scratches from hard dust particles. Blank discs should be perfectly clean before writing, as any speck on the surface will block the laser beam.

Do not place adhesive labels on discs, and use special CD/DVD markers for labeling. The best place for a tiny label is the non-recordable small middle ring. Why is that? The label side of a CD or DVD is separated from the recorded pits only by a thin layer, so chemicals can easily get through and damage the data layer.

Recorded DVDs should be stored in a dark and dry place. Here is a nice list of DVD handling recommendations on a NIST page.

Additional backup safety measures

Redundancy increases the probability of data retrieval, so for critical data it makes sense to make more copies. One copy could be placed off-site, ideally in a fire- and flood-proof place, for better protection.

Another option is additional error correction data created by ECC software like dvdisaster. The best option is to write the additional ECC data scattered on the same disc as the data. It's a trade-off of disc capacity for additional data safety.

Every medium has a limited life span, even when stored in the best conditions. Some manufacturers quote 30 years (or even 100 years!) for DVD discs, but I'm not that optimistic. Stored archives should be periodically checked for errors (dvdisaster has that functionality) and then moved to new media. That also makes sense in case of technology change and migration. Today the next popular optical format is Blu-ray Disc, but it's still young.
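
Such periodic checks don't require special tooling either. Below is a minimal sketch (assuming the disc is mounted somewhere like /media/dvd; the paths and file names are only examples) that records SHA-256 checksums in a manifest file and later re-verifies them:

    #!/usr/bin/env python3
    # Create or verify a SHA-256 manifest for a backup disc (illustrative sketch).
    import hashlib
    import sys
    from pathlib import Path

    def sha256_of(path):
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def create(root, manifest):
        with manifest.open("w") as out:
            for p in sorted(root.rglob("*")):
                if p.is_file():
                    out.write(f"{sha256_of(p)}  {p.relative_to(root)}\n")

    def verify(root, manifest):
        bad = 0
        for line in manifest.read_text().splitlines():
            digest, rel = line.split("  ", 1)
            target = root / rel
            if not target.is_file() or sha256_of(target) != digest:
                print("DAMAGED OR MISSING:", rel)
                bad += 1
        return bad

    if __name__ == "__main__":
        # usage: backup_check.py create|verify /media/dvd manifest.sha256
        mode, root, manifest = sys.argv[1], Path(sys.argv[2]), Path(sys.argv[3])
        if mode == "create":
            create(root, manifest)
        else:
            sys.exit(1 if verify(root, manifest) else 0)

Run the "create" step against the freshly burned disc, keep the manifest elsewhere, and run "verify" from time to time; any mismatch is a signal to copy the archive to new media.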

2010-03-25

Software development project size and methodology

Here is a short description of software development characteristics for projects of different sizes and their impact on methodology.

Very simple application (a single programmer and tester - one person for everything)

Requirements are simple and all known. If you are writing a very simple application using a good architecture plan and good coding practices, the result should be a clean, easy-to-change application. Adding automated tests makes changes easier without a drop in software quality. You know all the details without looking into documentation, and you test new changes and features during implementation. All changes go into the code repository with comments and a TODO file.
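
Just to illustrate what I mean by an automated test at this scale, here is a toy example (the function and the values are invented for the illustration):

    import unittest

    def monthly_installment(amount, months):
        # Example function under test: split an amount into equal monthly installments.
        if months <= 0:
            raise ValueError("months must be positive")
        return round(amount / months, 2)

    class MonthlyInstallmentTest(unittest.TestCase):
        def test_splits_evenly(self):
            self.assertEqual(monthly_installment(1200, 12), 100.0)

        def test_rejects_zero_months(self):
            with self.assertRaises(ValueError):
                monthly_installment(1200, 0)

    if __name__ == "__main__":
        unittest.main()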

Simple/medium application (team of 1+ programmers plus other stakeholders)

The requirements should be gathered and agreed upon before development. You use some kind of issue management system, tracking all change requests. The application is divided into main modules that are referenced in task details. Programmers know what to change, what the dependencies between modules are, and what side effects the implemented changes have. Testers know the main modules and go through test procedures for modules and functionalities. Procedures are short and clean, and there is a natural place for agile methodologies.

Large application (team of 10+ programmers and stakeholders)

The application is quite huge. Preparing requirements is often a separate project. Every change request is thoroughly reviewed and then approved. The application has many interdependencies caused by re-usability and connection points. Some changes have strictly local effects on a given feature; those are less harmful. Others change a lower-level service - probably unit tested, but still having an impact on a bigger process. Automated tests do not cover every possible use of the code, so manual top-level functional tests are the last line of defense in quality assurance. In real life every module has many connections with other modules, and testers have very limited knowledge about those dependencies. Everyone needs more plans, documents, procedures and artifacts, which slows down the whole process. A practical solution is to divide the large project into smaller, less complex sub-projects.

At this level of complexity there are many "process complete" management methodologies. The winning one is the one best suited to the given project, reducing risks and allowing the project to finish at the planned cost and time.

2010-03-20

Plan for data backup and recovery

I've started writing about data backup and recovery as a necessity in the "digital era". To make it simple and successful, it needs some upfront planning.
I've brought together the most important issues about safe data backup and recovery. Whether you have a PC hard disk full of pictures or lots of business data on a workstation, you should consider the following subjects:


  • Why do you need to make backups? (read the Backup your important data article)

  • Categorize your data - easily manageable big chunks divided by type/importance

  • Plan for recovery - what is saving data worth if you are unable to get it back?

  • Consider data retention - the time horizon (short- vs long-term backups)

  • Choose the storage media/place - whatever suits your needs

  • Simplify and automate the backup procedure - use specialized software or scripts to keep your backup plan going with minimum work on your side (see the small script sketch after this list)

  • Monitor the condition of your backup copies and manage them - think about changes if the process doesn't meet your needs
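
To make the "automate" point concrete, here is a minimal sketch. It assumes the photos live in ~/Pictures and the backups go to an external drive mounted at /mnt/backup; both paths are only examples:

    #!/usr/bin/env python3
    # Tiny backup automation sketch: zip a source folder into a dated archive.
    import shutil
    from datetime import date
    from pathlib import Path

    SOURCE = Path.home() / "Pictures"   # example source folder
    TARGET = Path("/mnt/backup")        # example backup destination

    def run_backup():
        TARGET.mkdir(parents=True, exist_ok=True)
        archive_base = TARGET / f"pictures-{date.today().isoformat()}"
        # make_archive adds the .zip extension and returns the final path
        return shutil.make_archive(str(archive_base), "zip", root_dir=SOURCE)

    if __name__ == "__main__":
        print("Backup written to", run_backup())

Scheduled with cron or the Windows Task Scheduler, a script like this keeps the plan running with practically no manual work.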



Maybe it seems dull and hard to implement, but everyone should cut it down to their own needs.
It's all about time and costs, but having your data saved is sometimes priceless.

2010-03-15

Database related general application performance tips

Most serious applications nowadays use some kind of relational database engine. The database becomes the main data source and data processing module in the application architecture. That's the reason why it is a common application performance bottleneck and the main scalability factor.

With the design and data flow requirements in hand, it's possible to prepare the database system in the system planning and implementation phase. Don't get too deep into performance details at that stage, because "premature optimization is the root of all evil". Sometimes the system requirements and design meet reality and final expectations, but changes in requirements that alter the application data flow are more common. The most accurate results come from real data and real users' usage patterns, so getting system responsiveness metrics is crucial even at the early prototype stages.

Such data about performance "weak points" is the starting point for optimizing and improving overall system scalability. It's good to begin with the top, more abstract layers of the application architecture before rushing into lower-level database storage tweaking.
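
A trivial way to start gathering such metrics at the prototype stage is to time the suspected "weak point" operations. The sketch below uses a simple decorator; load_report is just a stand-in for a real database-heavy call:

    import functools
    import time

    def timed(func):
        # Log how long each call takes - a crude but useful responsiveness metric.
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return func(*args, **kwargs)
            finally:
                elapsed_ms = (time.perf_counter() - start) * 1000
                print(f"{func.__name__} took {elapsed_ms:.1f} ms")
        return wrapper

    @timed
    def load_report():
        time.sleep(0.2)   # stand-in for a database-heavy operation

    load_report()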

Here are some tips, divided by subsystem scope and ordered from the top to the bottom abstraction level:

Application scope
  • discuss responsiveness requirements for the various application functionalities with project stakeholders
  • analyze data usage patterns, like writes vs reads, common data usage vs occasional reporting, written once or changed often, etc. - it gives an image of what could be improved and what kind of underlying database mechanisms you will need
  • remove the worst bottleneck (the one with the biggest performance impact) first to get the best results
  • use a cache for the most used data (reads and, if possible, writes) - see the small sketch after this list
  • design transactions smartly - long transactions cause performance problems
  • at first you should use a normalized data schema, but there are situations where a little denormalization is crucial for good performance (e.g. copying some data into a frequently read table to eliminate the need to join other big tables)
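
A minimal caching sketch for the read-heavy case, assuming dictionary-style data that practically never changes (the countries table and the database file name are invented for the example):

    import functools
    import sqlite3

    conn = sqlite3.connect("app.db")    # example database file

    @functools.lru_cache(maxsize=1024)
    def country_name(country_code):
        # Repeated calls with the same code hit the in-process cache, not the database.
        row = conn.execute(
            "SELECT name FROM countries WHERE code = ?", (country_code,)
        ).fetchone()
        return row[0] if row else "unknown"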

Database system scope
  • use indexes where they work best (every index adds a performance penalty on data writes) - see the short example after this list
  • use vertical and horizontal data partitioning - move less used data into other tables or use database-engine-specific features
  • configure the database and use its features, like special table types and special indexes, to your best advantage
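
A short, self-contained example of the index trade-off (table and data are made up), using SQLite's query plan to see whether the index is actually used:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
    conn.executemany(
        "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
        [(i % 100, i * 1.5) for i in range(10000)],
    )

    # Without an index this query scans the whole table.
    print(conn.execute(
        "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 42").fetchall())

    # An index on the filtered column turns the scan into an index lookup,
    # at the cost of extra work on every write touching that column.
    conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
    print(conn.execute(
        "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 42").fetchall())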

Operating system scope:
  • use a dedicated database host or a performance cluster - for large-scale systems
  • check network latency and throughput for large data transmissions
  • tune the underlying disk structure and file system - separate partitions or disks for database files, a low-overhead file system, or dedicated database "raw" partitions (as in Oracle DB)

2010-03-10

The meaning of "realistic" in FPS games

Grog News wrote a long and interesting article about the reality of guns, gear and tactics in FPS games. Even the games most often rated as "realistic" have large discrepancies from reality.

Although the gaming industry considers fun more important than realistic features, new FPS games are incorporating more complicated graphics, physics and AI engines to raise the playing experience to the next level.

Games with a good balance of realism and fun are tough enough to keep players interested for a long time (like Counter-Strike type games). Too many realistic constraints in a game result in a too hard and annoying experience (e.g. the player can't jump 20 meters down, or can't run a long distance at a constant high speed). It's an entertainment industry, so usually more fun means more profit.

I have to admit that even the simple changes in details pointed out in the Grog News article could improve both fun and realism. It doesn't even have to be some kind of fancy new physics engine. Take, for example, weapon and ammunition/magazine compatibility issues, or the bunny-jumping tactic for avoiding shots. I really would like to have features like ricochet effects, but there is room to improve the simple mechanisms first.

There is a bright future for "reality simulating" games; they are just evolving slowly.

2010-03-06

Backup your important data

Organizations are collecting more and more data on their workstations and servers. They also do regular data backups to prevent the loss of assets, time and money. Are you doing data backups? Are you ready to lose your data?

The simplest hard disk drive failures are bad blocks affecting single files. If the file is important, there is a good chance that it can be recovered with a proprietary tool. More destructive can be malware erasing or damaging random files, or a user hitting the "delete all" button on wrongly marked files.

Hard disk drives aren't perfect, and there is a quite large probability that your hard disk will die. The last resort then is a data recovery company, but the cost starts at about a thousand dollars and goes up from there.

What about laptop theft or a lost external drive? There are also events that can physically destroy your hardware, like fire. Then probably everything is gone.

Back up your important data. In the best case you will save your time or money. In the worst case you will prevent the loss of valuable data.

2010-03-03

Little change in the blog title

Choosing a good blog title can be a difficult art. I chose "IT Pro Life", which has seemed good so far. Maybe it is too pompous and enigmatic, but I wanted something simple and short that defines the character of this site.

When I started analyzing keywords, I realized that my content often shows up in a "pro-life" context. It seems people may be confused when they find totally unrelated content
("what? pro-life? It looks like no life!").
My bad, I didn't see that at the start.

I'm making a little change in the title, to IT Pr0 Life, using the slang spelling of "professional", that is: pr0 (p, r, zero). It will look funnier, and there is a place for a sense of humor here too :)

I'm keeping the domain name untouched; I will see how it works with search engines.

2010-02-11

Registering new blog domain on Technorati

I have posted the magic Technorati code to claim my new domain.
The Technorati support section says that posting that code is the surest method to prove blog ownership.
I think there should be other good (maybe more professional) methods. They made it that way... let's say, simpler for the "non-tech" blogger.

2010-02-08

A branded server machine means no cheap storage space

I've been looking for a server machine with lots of storage space. It is intended to be a specialized intranet web server, running some PHP applications, with constantly growing content. It should last for at least 5 years, and a low failure rate is an issue, so the CTO considers only branded and supported machines.

The funds (about $5000) are enough for only one decent entry-level machine or two "outdated" ones at a "sale" price. I'm not even mentioning a branded, specialized storage server, because of the tight budget. A quick comparison shows that it is better to get one better server with a quad-core Intel Xeon processor if peak performance, rather than load balancing, is the goal.

The second important goal is storage space. Most vendors sell servers only with their own branded and expensive hard disk drives. It is really annoying, because they offer disks that are 2 times smaller (500 GB at best) and 2 times more expensive than the analogous hardware from Seagate or Western Digital.
That makes the decision hard. It's hard to cut the price down.

There was an option to buy a disk-less machine, get disk frames from a 3rd-party Chinese vendor and put in hard drives of my own choice, but all of that without a warranty. After reviewing the ready configurations, I had to make a compromise.

I've bought a server with the minimal acceptable storage capacity, with an option for future expansion of both hard disk drives and RAM. In the future there should be at least modest funds for an upgrade, and disks should be cheaper by then, so it should work out.

Moral of the story: if you want a good and cheap machine, there is always a compromise between those two goals.

2010-01-25

Winter time - stress test for traffic congestion and car condition

During winter I witnessed many traffic jams or just slow-traffic problems. Driving sometimes takes 4 times longer than in average conditions. Sometimes slowing down below a certain level at just a few critical points is enough to clog up the roads in the whole city. I see some similarities with IT systems under stress load.

Building new roads is not a real-life solution for "fix it now" traffic situations. More appropriate is a holistic analysis of the system's performance and then system tuning, like smart adjustment of traffic lights to the actual congestion, etc. Some drivers are responsible and try to help everybody keep moving. But most cars move like a stream of flowing water, filling every empty space in the shortest time. So some kind of central traffic steering makes sense, and it is the subject of scientific research and real-life implementations.

Another situation: I had no problems with my car during winter down to -10 degrees Celsius. But a few more freezing days made the little inefficiencies in the car's systems stack up and disabled my car for a few days. Something like total performance degradation of an IT system during a stress test.

It's good to have a backup plan for the harsher "freezy" days.