When your CD or DVD (disc) drive starts giving you problems, your first thought may be to replace it or take it to the repair shop, but a good cleaning may be all it needs.
Below are three methods to clean the disc drive. The easiest method is the least effective. The hardest method is the most effective. Since the hardest method takes some time to do, I recommend that you start with the easiest method. If it solves your problems, congratulations. If not, try the next method.
The Cleaner Disc method - this, the easiest method, uses a special cleaner disc which can be purchased in computer stores. The disc usually comes with a little bottle of cleaner solution. Apply a few drops of the solution to the disc and insert it in the disc drawer (be sure to read and follow the instructions that come with the cleaner disc). The drive will turn the disc and clean the lens. Unfortunately, this only works adequately about half the time.
The Cleaning Stick method - this is what I do in desperation when the cleaner disc does not work and I don't want to disassemble the drive. Since all that is needed (at least in my mind this is true) is a little more pressure applied to the lens, I start out in search of a thin, flexible stick of some type that is at least six inches long. It should not have sharp or rough edges that would scratch the lens. Next, get a soft, thin cloth and put water or rubbing alcohol in the middle of it. Place one end of the stick under the wet part of the cloth and slide it into the opened disc drawer. The goal is to rub the wet cloth on the lens to clean it. Do not apply so much pressure that you scratch and ruin the lens. Also try blowing into the disc drive to remove any dirt that may have accumulated in it. If you do not succeed at this, proceed to the next method.
The Disassembly method - this method should work but it requires you to disassemble the drive. So if you are not comfortable with taking the drive apart, please take it to a computer repair shop and let them do it.
Take the cover off your computer, unplug the cords from the back of the disc drive, remove any screws holding it in, and slide it out (you may need to remove the face plate on the end of the drawer to get the drive out). Remove the screws in the drive housing and take the cover off. The bottom side of the drive is a circuit board, so if that is what you see when you take the cover off, figure out how to access the other side. On the correct side, you should see a lens that runs on a track (there is no harm in moving the lens along the track, but do not touch the lens itself with your fingers). Use a wet, soft cloth to gently clean the lens.
Sometimes a disc drive malfunctions because there is too much dust or debris in it, so be sure to clean out the inside with either compressed air, a soft cloth, or a cotton swab. Reassemble the disc drive, put it back in the computer case, and cross your fingers. Hopefully, it will work when you turn on the computer.
If these methods work, you just saved yourself some money. If not, you needed a better disc drive anyway.

Website Sales Purpose
When designing a website, it is important that webmasters ask some general questions before they begin the design process...
What Is The Purpose Of Your Website?
Many companies use websites to establish their brand. Others use websites as a communication tool. Some companies see websites as sales vehicles and "billboards". Still others use their website as an educational tool. And some may be any combination of the above. The website must have a purpose in order for it to be effective.
What Is It That You Are Trying To Accomplish With The Website?
A strong understanding of the website's purpose will allow a webmaster to emphasize the action they want visitors to take. By defining and understanding the purpose of the website, webmasters and publishers can better structure its information, presenting it with the appropriate emphasis and navigation. An ideal website will lead the visitor to take the action the webmaster wants.
Who Is Your Audience?
You must identify and understand your target audience. Understanding your demographic will allow you to cater content specific to that group.
What Are The Objectives Of The Website?
You also need to determine what the objective of your website is. What are you attempting to accomplish? Are you trying to sell something? Are you looking for downloads, or are sales your real objective? Is your website trying to promote a specific product or service? Do you want your visitors to take a specific action? Is the intent to profit from ad space in general, or to have website visitors click on specific ads? Are you trying to build a brand? Do you want visitors to purchase a product, or provide an email address?
When attempting to solicit a specific action, there are some general guidelines that you should follow. Your website should be designed to solicit the action you desire, so the navigation should intuitively lead the visitor to take that action. If clicking a link is the goal, then that link should be clearly indicated and prominent on the page. This will not only help ensure that the maximum number of visitors can view and navigate your content, but it will also help prompt those visitors to take the action you want.
For example: Many software companies struggle with the action they wish to solicit from the website visitor. Software companies and eBook publishers are often guilty of pushing users to download, at the expense of the actual sale. Some companies prefer to have users download prior to making a purchase decision, while others lose impulse purchasers by only pushing the download rather than the sale.
In Order To Maximize The Website's Sales Purpose And Objectives, Follow These Simple Steps...
Address Compatibility Issues
If a website visitor is unable to view the website's content, they are obviously going to be unable to complete the desired action. Compatibility issues can be related to technology or usability. Avoid technologies that require the website visitor to download a plug-in before they can view the content. If providing content in Flash is important to you, provide a Flash-free version as well. Also, do not alienate website visitors who might have a disability -- use proper web constructs, provide alt tags for images, and avoid color schemes that cause confusion.
Define A Clear Navigation Path
A website's navigation should provide the visitor with a clear path. Information architecture is the organization and categorization of online content -- the process of creating clarity and organizing online information in a purposeful, logical way. Prioritize and emphasize the most important items on the website. Give visitors a clear path to what they are seeking. Each and every page should intuitively provide links to additional information and purchase options.
Minimize Distractions
Minimize choices and other website distractions. Website visitors should be given a clear path of action. Do not present the website visitor with an abundance of choices -- studies show that too many choices often put the consumer off, and it is generally recommended that you provide no more than three. Keep your message concise and on-topic. Website visitors will often just scan a webpage rather than reading it, so bulleted lists and headlines can be used to emphasize your message.
It may sound like a cliche, but it's the little things that can make the biggest difference. Pay attention to all aspects of your website. Defining the specific website objectives and purpose will help to encourage the desired action or behavior from your website visitors.
Enabling High-Quality C/C++ Software, Automatically: Coverity Prevent
What Is It?
Coverity Prevent SQS™ is the market-leading automated approach to identifying and resolving the most critical defects in C, C++, and Java source code. By providing a complete understanding of your build environment, source code, and development process, Prevent SQS sets the standard in enabling high-quality software across organizations worldwide.
Prevent SQS for C/C++ automatically analyzes large, complex C and C++ code bases and detects critical, must-fix defects that could lead to system crashes, memory corruption, security vulnerabilities, unpredictable behavior, and performance degradation.
Prevent SQS features:
• 100% path coverage: Prevent SQS for C/C++ analyzes 100% of the paths through your source code, ensuring that all possible execution branches are followed, while avoiding impossible paths to maintain fast execution.
• Low false positive rate: Prevent SQS for C/C++ maintains a very low false positive rate, ensuring that developers’ time spent inspecting defects will result in noticeable quality improvements.
• Highly scalable: Prevent SQS for C/C++ analyzes millions of lines of code in a matter of hours, easily integrating into your regular build process with little or no additional hardware and no disruption to your development process.
What Makes It Great?
Unlike other C/C++ analysis tools that focus on programming style and syntax-based checks, Prevent SQS for C/C++ performs deep, interprocedural analysis to uncover the critical, must-fix defects that matter most to developers. Prevent SQS for C/C++ leverages multiple analysis engines to uncover hard-to-find defects including:
• Path Flow Engine understands the control flow through each function in your code base, allowing Prevent SQS to analyze 100% of the paths through your code.
• Statistical Engine tracks behavioral patterns throughout your entire code base, allowing Prevent SQS to infer correct behavior based on previously observed behavior.
• Interprocedural Summary Engine enables Prevent SQS to perform a whole-program analysis of complex call chains at any depth across files and modules, in a form that is most similar to the eventual executing binary. This results in the highest-fidelity results available.
• False Path Engine solves each branch condition to determine if it will be true, false, or unknown on the current path. This allows Prevent SQS to efficiently remove obvious false positives from the set of defects reported.
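To make the false-path idea concrete, here is a small, hypothetical C fragment (invented for this article, not taken from Coverity's documentation). A tool that explored every syntactic combination of branches would warn about a possible null dereference below, while an analysis that solves the branch conditions can prove the dangerous combination never executes:

    #include <stdlib.h>

    /* Hypothetical illustration of a "false path". The two if (flag)
     * branches are correlated: buf is dereferenced only when it was
     * successfully allocated. A checker exploring all four syntactic
     * branch combinations independently would report a possible NULL
     * dereference; solving the branch conditions shows that the
     * (flag && buf == NULL) case returns early and can never reach
     * the dereference. */
    int compute(int flag)
    {
        int *buf = NULL;
        int result = 0;

        if (flag) {
            buf = malloc(sizeof *buf);
            if (buf == NULL)
                return -1;        /* allocation failure handled here */
        }

        if (flag)
            result = (*buf = 42); /* safe: flag implies buf != NULL */

        free(buf);                /* free(NULL) is a harmless no-op */
        return result;
    }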
A sample of the critical defects reported by Prevent SQS for C/C++ includes:
Concurrency Issues
• Double locks, missing locks.
• Locks acquired in incorrect order.
• Locks held by blocking functions.
Memory Corruption and Mismanagement
• Resource leaks.
• Calls to freeing functions using invalid arguments.
• Excessive stack use in memory constrained systems.
Crash-Causing Pointer Errors
• Dereference of null pointers.
• Failure to check for null return values.
• Misuse of data contained within wrapper data types.
C++ Specific Errors
• Misuse of STL iterators.
• Failure to de-allocate memory by destructors.
• Incorrect override of virtual functions.
• Uncaught exceptions.
Windows/COM Specific Errors
• Incorrect memory allocation with COM interfaces.
• Incorrect type conversions.
Security Vulnerabilities
• Buffer overruns.
• SQL injection.
• Cross-site scripting.
• Integer overflows.
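For illustration, the short C function below packs together three of the defect classes named above: an unchecked fopen return value, a potential buffer overrun, and a file handle leaked on an early-return path. It is a made-up sketch of the kind of code such an analysis flags, not Coverity sample output:

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical example containing three of the defect classes above. */
    void process(const char *path, const char *name)
    {
        char buf[8];
        FILE *f = fopen(path, "r"); /* return value never checked...       */
        int c = fgetc(f);           /* ...so this may dereference NULL     */

        strcpy(buf, name);          /* buffer overrun if name has 8+ chars */

        if (c == EOF)
            return;                 /* resource leak: f is never closed on
                                       this early-return path              */
        fclose(f);
    }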
About Coverity
Coverity (http://www.coverity.com) is the market leader in improving software quality and security. Coverity’s groundbreaking technology automates the approach to identifying and resolving critical defects and security vulnerabilities in C/C++ and Java source code. More than 300 leading companies have chosen Coverity Prevent SQS because it scales to tens of millions of lines of code, has the lowest false positive rate in the industry and provides total path coverage. Companies like Ericsson, HP, Samsung, EMC, and Symantec work with Coverity to eliminate security and quality defects from their mission-critical systems.
Coverity also has customers like Symbian, RIM (BlackBerry), Juniper Networks, Cisco, and Texas Instruments, and its technology is used by the Department of Homeland Security to scan many open source projects.
Free trial
Coverity offers a free trial of Prevent SQS that will detect a wide range of crash-causing defects in your code base within hours. No changes to your code are necessary, there are no limitations on code size, and you will receive a complimentary report detailing actionable analysis results. Register for the on-site evaluation at http://www.coverity.com.
Mentoring In IT
This article is also available as a "The Sniffer Guy" podcast on iTunes.
ATTENTION AMERICAN IT MANAGERS: Within the next decade most of your best people will retire or die. Your senior staffers are baby boomers with twenty years or more of experience in their field. They built the systems; they learned the operating systems as they were created; they know what they know from real-life experience that cannot be learned in school. They are also somewhere between their late forties and early sixties. They rose to the top while competing within the largest workforce America has ever seen. When they leave, they will take a level of efficiency and expertise with them that will take twenty years to replace.
To make matters worse, the population of appropriately educated Americans coming up behind them is far smaller than the population getting ready to move out. Do the math. Start now. Trying to buy that talent later will not only cost you a fortune; you will also be competing with the entire world for a very small pool of such individuals.
In this corporate environment where everyone is disposable and so much work is done by contractors so that companies can avoid making a commitment to personnel, it is far too easy to miss this growing danger. Business managers may have learned this—but probably not. IT managers usually have not had much exposure to the concept at all. They live in a world of projects that staff up for the project and then disband. How do you bring up the next crop of leaders in such an environment? This is going to take far too many companies by surprise! And IT is going to be hit much harder than most other departments. I know of no other area of corporate life that is so project oriented. In the IT world, you build a team and disband it a few months later--even when they do outstanding work. All in the name of avoiding long-term cost. It also avoids long-term success.
There is also an emotional and psychological component to this problem. After the Dot-Bomb debacle, many people with decades of smarts were kicked out due to layoffs, companies failing, or being eaten by a bigger company that had its own staff. Why do we eat our seed corn? Those that survived are still concerned about it happening again. And it could. It makes them conservative. Possibly even a tiny bit timid, and less likely to share their knowledge freely. I don't blame them, because our corporate mentality is to cut the expensive people and replace them with contractors that we can easily get rid of when the job is done. What a vote of no confidence! This is considered to be a strategy. OK…I will accept that. It is a strategy. But it is a very short-sighted one. Where is the mid-range planning?
So, what do you do? In this article and podcast, let us restrict our focus to Mentoring. Future articles and podcasts will explore other activities that are also proven and available.
Do you have a mentoring plan in place? I don't mean the typical, "oh, we believe in mentoring around here" kind of plan. I mean a thought-out, purposeful plan whereby you determine which journeyman IT personnel have the potential to grow into those senior roles, and have your baby boomer senior staffers truly mentor them to bring them along. I doubt it. It does exist; I know of a few such companies. But it is rare.
Part of the problem for those that want to create a mentoring program is that it is not so simple to identify candidates. Let me help with that. Not everyone is a candidate for mentoring and few people are cut out to be mentors. It's sad but true. Don't spin your wheels and exhaust your enthusiasm backing the wrong plan and/or individuals. You need to have some way to identify in whom you want to invest. And, please understand, it is an investment. You will invest money but not only money. You will invest the time of very busy and critical people. That will hurt a bit—but you don't really have any choice. If you are responsible for future planning in your organization, ignoring this process is irresponsible.
Here is a handy way to help make these determinations. A friend once told me that he had learned in a sales course at IBM, decades ago, about a concept that went something like this—and I may be mangling it so please forgive me. It was not meant for IT or Mentorship purposes, but I have adapted it.
There are four levels of competence. They are listed in order from least capable to most capable in performing their job. Oddly, this does not represent the order in which they are most effective in a mentoring program.
- Unconsciously Incompetent
- Consciously Incompetent
- Consciously Competent
- Unconsciously Competent
UNCONSCIOUSLY INCOMPETENT: This person doesn't know that they don't know. They are not a candidate for this program—but may need help in learning to learn.
A famous story about Thomas Edison says that he used to test fresh new engineers who wanted to work for him by putting them in a lab with a uniquely and oddly shaped glass container. He would tell them to figure out the internal volume of the container. One time he watched a new graduate work out the problem by measuring all the diameters of the odd twists and turns of the glass and carefully making the calculations on his slide rule. When he presented the answer, Edison said, "You got the right answer, but I can't give you the job." The young man asked why, and Edison responded by picking up the container, filling it with water, and pouring it into a graduated beaker, getting the answer in ten seconds. He said, "Son, I am glad you know the answer, but I'm afraid you just don't know the question." The Unconsciously Incompetent person does not know the question.
CONSCIOUSLY INCOMPETENT: This person knows that they don't know and is probably working to get better. They are a junior person with potential. Such an individual bears watching—and possibly a little testing. Don't make it something too hard, but it should be a little scary, something that makes them stretch. See what happens. This is a good candidate to groom for middle management and in future years, senior management.
CONSCIOUSLY COMPETENT: This is where the high performers stand. They will be in middle to senior management already. These people are two-for-one sales, all by themselves. They are both a candidate to RECEIVE mentoring—for senior management—and the ideal person to PROVIDE mentoring for the Consciously Incompetent candidate. They have a high level of skill and consistently perform very well.
This person knows what they are doing, and remembers learning how to do it. They are not as capable as the Unconsciously Competent person. Nevertheless, they know what they know and they know how to transfer it to someone else--if they are motivated and are not afraid of losing their own place. If they know that they are part of something stable and long term and can afford to create their replacement—they are who you need. Because that is exactly what you want them to do. You want them to create their own replacement: to bring up someone who will ask management for less and has a longer run in front of them, and to know that they are not committing financial suicide by doing so.
Not all people in this category will make good Mentors, as communication skills and a desire to teach are critical components of performing well in the role. I know many individuals who are extremely skilled and have the sort of knowledge that is transferable—but who could never fill this role successfully. You need to keep other variables in mind.
- Communication Skills
- A temperament that tends toward explaining what they are doing, rather than keeping things "close to their vest."
- Good people skills
The people that will make the best Mentors are already doing it. They are respected by their peers as someone that is very free with their knowledge. They are just informal about it as there is no real structure. Find those people and give them a mandate, the time and some guidance and they will do a wonderful job for you.
UNCONSCIOUSLY COMPETENT: The highest level. This person doesn't even know why they are so good anymore. Everything is so effortless that it is unconscious. This is the best you can get, and you may only meet a handful of people like this in your career. Don't touch this person! There are two very good reasons why.
1) They are not replaceable or reproducible. They really are unique. Give them whatever they want to keep them doing what they do and don't distract them!
2) The other reason to keep them away from a mentoring program is that they make terrible mentors. They have no idea how they are doing what they are doing. They just do it--better than anyone else. But they can't teach what they themselves don't really understand. Treat them as the gift that they are and get out of their way. Additionally, the probable failure of their attempt at mentoring will mess with their confidence. You don't want that.
There is a lot written on mentoring techniques, so I will not belabor the point. You, the IT Managers, may not have the authority or sense of security to set up this sort of program. I understand. However, if you want to do it and you have the authority, it isn't really hard to begin. There is a lot of material already in publication about various approaches. This is not a new concept. Available resources will probably not be specifically IT management related, but you can apply their lessons. My goal in this article is not to present something you have never heard of before. Rather, it is to remind you of what you already know--and to demonstrate how critical it has become to use that information.
Projects are also an opportunity. If you allow less capable people to work with more capable people, or more accurately, tag along, relationships can be created. Make the project oriented nature of our industry, which is its greatest weakness in this regard, become a new strength.
Complete Overview of Linux
This article discusses the differences between the Linux and Windows operating systems, including some of the pros and cons of each.
Let us first start with a general overview of the Linux operating system. Linux in its most basic form is a computer kernel. The kernel is the underlying computer code used to communicate with hardware and other system software; it also runs all of the basic functions of the computer.
The Linux kernel is an operating system that runs on a wide variety of hardware and serves a variety of purposes. Linux is capable of running on devices as simple as a wristwatch or a cell phone, but it can also run on a home computer using, for example, Intel or AMD processors, and it is even capable of running on high-end servers using Sun SPARC CPUs or IBM PowerPC processors. Some Linux distros can run on only one processor, while others can use many at once.
Common uses for Linux include home desktop computing and, more commonly, server applications such as web or mail servers. You can even use Linux as a dedicated firewall to help protect other machines on the same network.
A programming student named Linus Torvalds first created Linux as a variant of the Unix operating system in 1991. Torvalds made Linux open source under the GNU General Public License (GPL), so other programmers could download the source code free of charge and alter it any way they saw fit. Thousands of coders throughout the world began downloading and altering the source code of Linux, applying patches, bug fixes, and other improvements to make the OS better and better. Over the years Linux has gone from a simple text-based clone of Unix to a powerful operating system with full-featured desktop environments, unprecedented portability, and a variety of uses. Most of the original Unix code has also been gradually written out of Linux over the years.
As a result of Linux being open source software, there is no one version of Linux; instead there are many different versions, or distributions, of Linux that are suited to a variety of different users and tasks. Some distributions of Linux, such as Gentoo and Slackware, lack a complete graphical environment and are best suited to Linux experts, programmers, and other users who know their way around a command prompt. Distributions that lack a graphical environment are best suited to older computers lacking the processing power necessary to process graphics, or to computers performing processor-intensive tasks, where it is desirable to have all of the system resources focused on the task at hand rather than spent processing graphics. Other Linux distributions aim to make the computing experience as easy as possible. Distributions such as Ubuntu or Linspire make Linux far easier to use by offering full-featured graphical environments that help eliminate the need for a command prompt. Of course, the downside of ease of use is less configurability and system resources spent on graphics processing. Other distributions, such as SUSE, try to find a common ground between ease of use and configurability.
“Linux has two parts: the kernel mentioned previously, and, in most circumstances, a graphical user interface that runs atop the kernel” (ref #3). In most cases the user will communicate with the computer via the graphical user interface.
(ref #6) Some of the more common graphical environments that run on Linux include the following. The KDE GUI (graphical user interface): Matthias Ettrich developed KDE in 1996. He wanted a GUI for the Unix desktop that would make all of the applications look and feel alike, and a desktop environment for Unix that would be easier to use than the ones available at the time. KDE is a free open source project with thousands of coders working on it throughout the world, but it also has commercial support from companies such as Novell, Trolltech, and Mandriva. KDE aims to provide an easy-to-use desktop environment without sacrificing configurability. Windows users might note that KDE has a similar look to Windows. Another popular GUI is (ref #7) GNOME. GNOME puts a heavy emphasis on simplicity and usability. Much like KDE, GNOME is open source and free to download. One notable feature of GNOME is its broad language support: GNOME supports over 100 different languages. GNOME is licensed under the LGPL (Lesser General Public License), which allows applications written for GNOME to use a much wider set of licenses, including some commercial licenses. The name GNOME stands for GNU Network Object Model Environment. GNOME's look and feel is similar to that of other desktop environments. Fluxbox is another example of a Linux GUI. With less of an emphasis on ease of use and eye candy, Fluxbox aims to be very lightweight and a more efficient user of system resources. The interface has only a taskbar and a menu bar, which is accessed by right-clicking on the desktop. Fluxbox is most popular for use on older computers that have limited system resources.
Although most Linux distributions offer a graphical environment to simplify the user experience, they all also offer a way for more technically involved users to communicate directly with the kernel via a shell or command line. The command line allows you to run the computer without a GUI by executing commands from a text-based interface. An advantage of using the command prompt is that it uses fewer system resources and enables your computer to focus more of its energy on the task at hand. Examples of commands include the cd command for changing your directory, the halt command for shutting down your system, or the reboot command for restarting the computer, etc.
Now that we are more familiar with the Linux operating system, we can note the many ways in which Linux differs from the world's most popular OS, Microsoft Windows. From this point forward we will discuss some of the more prominent ways in which Linux differs from Windows.
For starters, there is only one company that releases a Windows operating system, and that company is Microsoft. All versions of Windows, whether XP Home, Business, or Vista, and all updates, security patches, and service packs for Windows come from Microsoft. With Linux, on the other hand, there is no single company that releases it. Linux has thousands of coders and many companies throughout the world volunteering their time to work on patches, updates, newer versions, and software applications. Although some companies charge for tech support, and others charge for their distribution of Linux by packaging it with non-free software, you will always be able to get the Linux kernel for free, and you can get full-featured Linux desktops with all the necessary applications for general use for free as well. The vendors that charge money for their distribution of Linux are also required to release a free version in order to comply with the GPL license agreement. With Microsoft Windows, on the other hand, you have to pay Microsoft for the software, and you will also have to pay for most of the applications that you use.
Windows and Linux also differ on tech support issues. Windows is backed by the Microsoft Corporation, which means that if you have an issue with any of their products, the company should resolve it. For example, if Microsoft Windows is not working right, you should be able to call Microsoft and make use of their tech support to fix the issue. Tech support is usually included with the purchase of the product for a certain amount of time, perhaps a two-year period, after which you may be charged for the service. Although IBM backs its Linux products, for the most part if you use Linux you are on your own. If you have a problem with Ubuntu Linux, you cannot call Ubuntu and expect any help. Despite the lack of professional help, you can receive good tech advice from the thousands of Linux forums on the web. You can also get great help from social networking sites such as Myspace by posting questions in the many Linux groups. You can usually receive responses to your questions in a matter of hours from many qualified people.
Configurability is another key difference between the two operating systems. Although Windows offers its Control Panel to help users configure the computer to their liking, it does not match the configuration options that Linux provides, especially if you are a real tech-savvy user. In Linux the kernel is open source, so if you have the know-how, you can modify it in virtually any way you see fit. Linux also offers a variety of graphical environments to further suit your needs. As mentioned earlier, Linux is capable of running full-featured graphical environments like KDE, or more lightweight and resource-friendly GUIs like Fluxbox or Blackbox, to suit users with older computers. There are also versions of Linux designed to emulate the Windows look and feel as closely as possible; distributions such as Linspire are best suited to users migrating over from the Windows world. There are also distributions that include no graphical environment at all, to better suit users who need to squeeze out all of the computing power they can get for various computing activities, and more advanced users. All of this configurability can sometimes be problematic, as you will have to decide which desktop is right for you, and to make things easier on yourself you should install only applications that are native to your distribution and graphical environment.
(ref #1) The cost effectiveness of Linux is another way it separates itself from Windows. For home use Linux is cheap and in most cases completely free, while Windows varies in cost depending on which version you buy. With Linux most of the applications are also free, whereas for Windows in the majority of cases you are supposed to pay for the applications. In most cases, with Linux there is no need to enter a product activation key when performing an installation; you are free to install it on as many computers as you'd like. With Windows you are only allowed to install it on one computer, and Microsoft uses product activation software to enforce this rule: when installing Windows you must enter a product activation key, which is good for only a limited number of activations. If you wish to, you can purchase Linux from a variety of vendors, which will include a boxed set of CDs, manuals, and tech support, for around $40-$130. If you purchase a high-end version of Linux used for servers, it may cost anywhere from $400 to $2,000. In 2002, Computerworld magazine quoted the chief technology architect at Merrill Lynch in New York as saying, "the cost of running Linux is typically a tenth of the cost of running Unix or Windows alternatively." (ref #1)
(ref #1) Installation of Windows is generally easier than installing Linux. "With Windows XP there are three main ways to install. There is a clean install, in which you install Windows on a blank hard drive. There is also an upgrade install, in which you start with an older version of Windows and 'upgrade' to a newer one. An advantage of upgrading is that all of the files on the older system should remain intact throughout the process. You can also perform a repair install, in which case you are installing the same version of Windows on top of itself in order to fix a damaged version of Windows. There is also a recovery, which technically is not an install; it is used to restore a copy of Windows back to its factory settings. The disadvantage of recovering Windows is the fact that you will lose all of your data, which resides on the damaged copy of Windows." (ref #1) Also, with Windows you can rest assured that your hardware will most likely be supported by the operating system, whereas with Linux, although this is not as much of a problem as it once was, you cannot be sure that all of your hardware will be supported. With Linux, installation varies greatly from distro to distro. You may be presented with a graphical installer or a text-based installer; these variations make Linux a bit more difficult and unpredictable to install than Windows (although the difficulty is disappearing). You may perform a clean install of Linux or dual-boot it to co-exist with another operating system. With Linux, rather than having to buy an upgrade CD, you can install updates by downloading and installing them while your desktop is running, and it is not necessary to reboot your computer after most upgrades; a reboot is only needed after an upgrade to the kernel. It is also possible to run Linux without ever installing it on a hard drive: many distributions of Linux will let you run the OS straight off of a live CD. The advantage of this is that you do not need to alter your system in order to try Linux; you can run Linux off the CD without touching your Windows partition. Other advantages include the ability to rescue a broken Linux system: if your Linux computer will not boot, you may insert a live CD and boot off it, so you can repair the damaged installation. You may also use a Linux live CD to recover files from a damaged Windows computer that will no longer boot up. Since Linux is capable of reading NTFS file systems, you may copy files from a Windows computer to a USB flash drive, floppy drive, etc.
Another major difference between Linux and Windows is the applications you will use with either OS. Windows includes a much wider abundance of commercially backed applications than Linux does. It is much easier to find the software you are looking for with Windows, because so many software vendors make their products compatible with Windows only. With Linux you will, for the most part, have to let go of the familiar applications you have grown accustomed to on Windows in favor of lesser-known open source apps made for Linux. Applications such as Microsoft Office, Outlook, Internet Explorer, Adobe Creative Suite, and chat clients such as MSN Messenger do not work natively with Linux (although you can get Microsoft Office and Adobe Creative Suite to work on Linux using software from CodeWeavers called CrossOver Office). Instead you will need to use Linux apps such as OpenOffice, the GIMP image editor, and the Thunderbird email client; instead of MSN Messenger you can use the Gaim messenger, and you can use Firefox as your web browser. Also, with Linux it can be difficult to install software even if it is made for Linux, because Linux has so many different versions: software made to install on one version will probably require some configuration in order to install on another. For example, an application made for the KDE graphical environment will not easily install on the GNOME GUI and will require some configuring on your part to install successfully.
The type of hard ware that Linux and windows runs on also causes them to differ. Linux will run on many different hardware platforms, from Intel and AMD chips, to computers running IBM power Pc processors. Linux will run on the slowest 386 machines to the biggest mainframes on the planet, newer versions of Windows will not run on the same amount of hardware as Linux. Linux can even be configured to run on apples, Ipod’s, or smart phones. A disadvantage of Linux is when it comes to using hardware devices such as Printers, Scanners, or Digital camera’s. Where as the driver software for these devices will often be easily available for Windows, with Linux you are for the most part left on your own to find drivers for these devices. Most Linux users will find comfort in the fact that drivers for the latest hardware are constantly being written by coders throughout the world and are usually very quickly made available.
(ref #1) One of the most notable differences between the two operating software’s is Windows legendary problems with malicious code, known as Viruses and Spy ware. Viruses, Spy-ware and a general lack of security are the biggest problems facing the Windows community. Under Windows Viruses and Spy-ware have the ability to execute themselves with little or no input from the user. This makes guarding against them a constant concern for any Windows user. Windows users are forced to employ third party anti virus software to help limit the possibility of the computer being rendered useless by malicious code. Anti virus software often has the negative side effect of hogging system resources, thus slowing down your entire computer, also most anti virus software requires that you pay a subscription service, and that you constantly download updates in order to stay ahead of the intruders. With Linux on the other hand problems with viruses are practically non-existent, and in reality you do not even need virus protection for your Linux machine. One reason why Viruses and Spy-ware are not a problem for Linux is simply due to the fact that there are far fewer being made for Linux. A more important reason is that running a virus on a Linux machine is more difficult and requires a lot more input from the user. With Windows you may accidentally run and execute a virus, by opening an email attachment, or by double clicking on a file that contains malicious code. However with Linux a virus would need to run in the terminal, which requires the user to give the file execute permissions, and then open it in the terminal. And in order to cause any real damage to the system the user would have to log in as root, by typing a user name and password before running the virus. Foe example to run a virus that is embedded in an email attachment the user would have to, open the attachment, then save it, then right click the file and chose properties form the menu, in properties they can give it execute permissions, they would then be able to
open the file in the terminal to run the virus. And even then the user would only be able to damage his or her home folder, all other users data will be left untouched, and all root system files would also remain untouched, because Linux would require a root password to make changes to these files. The only way the user can damage the whole computer would be if he or she logged in as root user by providing the root user name and password to the terminal before running the virus. Unlike Windows in Linux an executable file cannot run automatically, It needs to be given execute permissions manually this significantly improves security. In Linux the only realistic reason you would need virus protection is if you share files with Windows users, and that is to protect them not you, so you are not to accidentally pass a virus to the Windows computer that you are sharing files with.
The above was a general over view of some differences between the Windows operating system, and Linux. To recap we started with the fact that Windows has only one vendor that releases the software, while Linux comes from millions of different coders throughout the world. We also commented on the fact that the Linux Kernel and much of the applications used with it are completely free of charge, where as with windows you are forced to pay for most of the software. Unlike Widows Linux is often lacking in professional Tech support, and Linux users are often left on their own to solve Technical issues. Linux users can either pay for Tech support or rely on the many Linux Forums and groups available on the Internet. Due to the fact that the kernel is open source, Linux has a huge advantage over Windows in configurability. You can configure Linux to run almost any way you see fit by manipulating the Kernel. Installing the Windows Operating software and applications is easier due to the fact that it has a universal installer. Also finding applications for Windows is easier because of its popularity most apps are available for Windows only, and are made easily available. Linux will run on a greater variety of hard ware than does Windows, from mainframe super computers running multiple IBM Power PC Chips, to a small laptop running an AMD processor. And of course the biggest difference in this writer’s opinion is the fact that Linux does not suffer from an onslaught of Viruses and other malicious code, unlike Windows which is plagued by countless number of malicious code that can easily destroy your system if not properly guarded against.
In conclusion we will conclude that the Linux OS really is the superior software. Other than a few minor nuisances, linux out performs Windows in most categories. The fact that Linux is more secure is the tipping point, that tilts the scales in the favor of Linux. Windows simply suffers from far to many security vulnerabilities for it to be considered the better over all desktop environment.
References
http://www.michaelhorowitz.com/Linux.vs.Windows.html Reference #1
http://www.theinquirer.net/en/inquirer/news/2004/10/27/linux-more-secure-than-windows-says-study Reference #2
http://www.linux.com/whatislinux/ reference number 3
http://www.linux.org/info/
Reference #4
http://en.wikipedia.org/wiki/Linux%5Fkernel Reference #5
http://en.wikipedia.org/wiki/KDE Reference #6
http://en.wikipedia.org/wiki/GNOME Reference #7
Let us first start out with a general overview of the Linux operating system. Linux, in its most basic form, is a computer kernel. The kernel is the underlying computer code used to communicate with hardware and other system software; it also runs all of the basic functions of the computer.
The Linux kernel runs on a wide variety of hardware and serves a variety of purposes. Linux is capable of running on devices as simple as a wristwatch or a cell phone, but it can also run on a home computer using, for example, Intel or AMD processors, and it is even capable of running on high-end servers using Sun SPARC CPUs or IBM PowerPC processors. Some Linux distros can use only one processor, while others can use many at once.
Common uses for Linux include home desktop computing and, more commonly, server applications, such as serving as a web server or mail server. You can even use Linux as a dedicated firewall to help protect other machines on the same network.
A programming student named Linus Torvalds first created Linux as a variant of the Unix operating system in 1991. Torvalds released Linux as open source under the GNU General Public License (GPL), so other programmers could download the source code free of charge and alter it any way they saw fit. Thousands of coders throughout the world began downloading and altering the source code of Linux, applying patches, bug fixes, and other improvements to make the OS better and better. Over the years Linux has gone from a simple text-based clone of Unix to a powerful operating system with full-featured desktop environments, unprecedented portability, and a variety of uses. Most of the original Unix code has also been gradually written out of Linux over the years.
Because Linux is open source software, there is no single version of Linux; instead there are many different versions, or distributions, of Linux suited to a variety of users and tasks. Some distributions, such as Gentoo and Slackware, lack a complete graphical environment and are best suited to Linux experts, programmers, and other users who know their way around a command prompt. Distributions that lack a graphical environment are best suited for older computers lacking the processing power necessary to process graphics, or for computers performing processor-intensive tasks, where it is desirable to have all of the system resources focused on the task at hand rather than spent on processing graphics. Other Linux distributions aim to make the computing experience as easy as possible. Distributions such as Ubuntu or Linspire make Linux far easier to use by offering full-featured graphical environments that help eliminate the need for a command prompt. Of course, the downside of ease of use is less configurability and system resources spent on graphics processing. Other distributions, such as SUSE, try to find a common ground between ease of use and configurability.
“Linux has two parts: they include the kernel mentioned previously, and in most circumstances it will also include a graphical user interface, which runs atop the kernel” (ref #3). In most cases the user will communicate with the computer via the graphical user interface.
(ref #6) Some of the more common graphical environments that run on Linux include the following. The first is the KDE GUI (graphical user interface). Matthias Ettrich developed KDE in 1996. He wanted a GUI for the Unix desktop that would make all of the applications look and feel alike, and a desktop environment for Unix that would be easier to use than the ones available at the time. KDE is a free, open source project with a large community of coders working on it throughout the world, but it also has commercial support from companies such as Novell, Trolltech, and Mandriva. KDE aims to provide an easy-to-use desktop environment without sacrificing configurability. Windows users might note that KDE has a look similar to Windows. Another popular GUI is (ref #7) GNOME. GNOME puts a heavy emphasis on simplicity and usability. Much like KDE, GNOME is open source and free to download. One notable feature of GNOME is its language support: GNOME supports over 100 different languages. GNOME is licensed under the LGPL (Lesser General Public License), which allows applications written for GNOME to use a much wider set of licenses, including some commercial ones. The name GNOME stands for GNU Network Object Model Environment. GNOME's look and feel is similar to that of other desktop environments. Fluxbox is another example of a Linux GUI. With less of an emphasis on ease of use and eye candy, Fluxbox aims to be very lightweight and a more efficient user of system resources. The interface has only a taskbar and a menu bar, which is accessed by right-clicking on the desktop. Fluxbox is most popular for use with older computers that have limited system resources.
Although most Linux distributions offer a graphical environment to simplify the user experience, they all also offer a way for more technically involved users to communicate directly with the kernel via a shell, or command line. The command line allows you to run the computer without a GUI by executing commands from a text-based interface. An advantage of using the command prompt is that it uses fewer system resources, enabling your computer to focus more of its energy on the task at hand. Examples of commands include the cd command for changing your directory, the halt command for shutting down your system, and the reboot command for restarting the computer.
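For illustration, here is how those commands might be typed at a shell prompt (the directory path is just an example):

    cd /home/user/documents    # change the working directory
    halt                       # shut the system down (normally requires root)
    reboot                     # restart the computer (normally requires root)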
Now that we are more familiar with the Linux operating system, we can note the many ways in which Linux differs from the world's most popular OS, Microsoft Windows. From this point forward we will discuss some of the more prominent ways in which Linux differs from Windows.
For starters, there is only one company that releases a Windows operating system, and that company is Microsoft. All versions of Windows, whether Windows XP Home, Business, or Vista, and all updates, security patches, and service packs for Windows, come from Microsoft. With Linux, on the other hand, there is no one company that releases it. Linux has millions of coders and companies throughout the world volunteering their time to work on patches, updates, newer versions, and software applications. Although some companies charge for tech support, and others charge for their distribution of Linux by packaging it with non-free software, you will always be able to get the Linux kernel for free, and you can get full-featured Linux desktops with all the necessary applications for general use for free as well. The vendors that charge money for their distribution of Linux are also required to make the source code available in order to comply with the GPL license. With Microsoft Windows, on the other hand, you have to pay Microsoft for the software, and you will also have to pay for most of the applications that you will use.
Windows and Linux also differ on tech support. Windows is backed by the Microsoft Corporation, which means that if you have an issue with any of their products the company should resolve it. For example, if Microsoft Windows is not working right, you should be able to call Microsoft and make use of their tech support to fix the issue. Tech support is usually included with the purchase of the product for a certain amount of time, perhaps a two-year period, after which you may be charged for the service. Although IBM backs its Linux products, for the most part if you use Linux you are on your own. If you have a problem with Ubuntu Linux, you cannot call Ubuntu and expect any help. Despite the lack of professional help, you can receive good tech advice from the thousands of Linux forums on the web. You can also get help from social networking sites such as MySpace by posting questions in the many Linux groups, and you can usually receive responses to your questions within a matter of hours from qualified people.
Configurability is another key difference between the two operating systems. Although Windows offers its Control Panel to help users configure the computer to their liking, it does not match the configuration options that Linux provides, especially if you are a tech-savvy user. In Linux the kernel is open source, so if you have the know-how, you can modify it in virtually any way you see fit. Linux also offers a variety of graphical environments to further suit your needs. As mentioned earlier, Linux is capable of running full-featured graphical environments like KDE, or more lightweight and resource-friendly GUIs like Fluxbox or Blackbox, to suit users with older computers. There are also versions of Linux designed to emulate the Windows look and feel as closely as possible; distributions such as Linspire are best suited for users migrating over from the Windows world. There are also distributions that include no graphical environment at all, to better suit advanced users and those who need to squeeze out all of the computing power they can get. All of this configurability can sometimes be problematic, as you will have to decide which desktop is right for you, and to make things easier on yourself you should install only applications that are native to your distribution and graphical environment.
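As a rough illustration of the kind of kernel-level configuration involved, the traditional rebuild procedure looks something like the following sketch (paths and exact steps vary by distribution and kernel version):

    cd /usr/src/linux          # assumes the kernel source tree is installed here
    make menuconfig            # choose which features and drivers to compile in
    make                       # build the kernel
    sudo make modules_install  # install the kernel modules
    sudo make install          # install the new kernel image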
(ref #1) Cost effectiveness is another way Linux separates itself from Windows. For home use Linux is cheap and in most cases completely free, while Windows varies in cost depending on which version you buy. With Linux most of the applications are also free, whereas for Windows in the majority of cases you are supposed to pay for the applications. In most cases, with Linux there is no need to enter a product activation key when performing an installation; you are free to install it on as many computers as you'd like. With Windows you are only allowed to install it on one computer, and Microsoft uses product activation software to enforce this rule: when installing Windows you must enter a product activation key, which can only be used for a limited number of activations. If you wish, you can purchase Linux from a variety of vendors, which will include a boxed set of CDs, manuals, and tech support, for around $40 to $130. A high-end version of Linux used for servers may cost anywhere from $400 to $2,000. “In 2002, Computerworld magazine quoted the chief technology architect at Merrill Lynch in New York as saying 'the cost of running Linux is typically a tenth of the cost of running Unix or Windows alternatively.'” (ref #1)
(ref #1) Installation of Windows is generally easier than installation of Linux. “With Windows XP there are three main ways to install. There is a clean install, in which you install Windows on a blank hard drive. There is also an upgrade install, in which you start with an older version of Windows and 'upgrade' to a newer one. An advantage of upgrading is that all of the files on the older system should remain intact throughout the process. You can also perform a repair install, in which case you are installing the same version of Windows on top of itself in order to fix a damaged version of Windows. There is also a recovery, which technically is not an install; it is used to restore a copy of Windows back to its factory settings. The disadvantage of recovering Windows is the fact that you will lose all of your data, which resides on the damaged copy of Windows.” (ref #1) Also, with Windows you can rest assured that your hardware will most likely be supported by the operating system; with Linux, although this is less of a problem than it once was, you cannot be sure that all of your hardware will be supported. With Linux, installation varies greatly from distro to distro. You may be presented with a graphical installer or a text-based installer, and these variations make Linux a bit more difficult and less predictable to install than Windows (although the difficulty is disappearing). You may perform a clean install of Linux or dual-boot it to co-exist with another operating system. With Linux, rather than having to buy an upgrade CD, you can install updates by downloading and installing them while your desktop is running. It is also not necessary to reboot your computer after most upgrades; a reboot is only necessary after an upgrade to the kernel. It is also possible to run Linux without ever installing it on a hard drive: many distributions of Linux will let you run the OS straight off a live CD. The advantage of this is that you do not need to alter your system in order to try Linux; you can run Linux off the CD without touching your Windows partition. Other advantages include the ability to rescue a broken Linux system: if your Linux computer will not boot, you may insert a live CD, boot from it, and repair the damaged installation. You may also use a Linux live CD to recover files from a damaged Windows computer that will no longer boot. Since Linux is capable of reading NTFS file systems, you may copy files from a Windows computer to a USB flash drive, floppy disk, etc.
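As a minimal sketch of that rescue scenario from a live CD (the device names /dev/sda1 and /dev/sdb1 are examples; yours may differ):

    sudo mkdir -p /mnt/windows /mnt/usb
    sudo mount -t ntfs -o ro /dev/sda1 /mnt/windows   # Windows partition, mounted read-only
    sudo mount /dev/sdb1 /mnt/usb                     # USB flash drive
    cp -r "/mnt/windows/Documents and Settings" /mnt/usb/   # copy the user files across
    sudo umount /mnt/windows /mnt/usb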
Another major difference between Linux and Windows is the applications you will use with either OS. Windows includes a much wider selection of commercially backed applications than Linux does. It is much easier to find the software you are looking for with Windows than with Linux, because so many software vendors make their products compatible with Windows only. With Linux you will, for the most part, be forced to let go of the familiar applications you have grown accustomed to on Windows in favor of lesser-known open source apps made for Linux. Applications such as Microsoft Office, Outlook, Internet Explorer, Adobe Creative Suite, and chat clients such as MSN Messenger do not work natively with Linux, although you can get Microsoft Office and Adobe Creative Suite to work using software from CodeWeavers called CrossOver Office. Instead of these applications you will need to use Linux apps: OpenOffice in place of Microsoft Office, the GIMP image editor, the Thunderbird email client, the Gaim messenger instead of MSN Messenger, and Firefox as your web browser. Also, with Linux it can be difficult to install software even when it is made for Linux, because Linux has so many different versions; software made to install on one version will probably require some configuration in order to install on another. For example, an application made for the KDE graphical environment will not easily install under the GNOME GUI and may require some configuring on your part to install successfully.
The type of hardware that Linux and Windows run on also sets them apart. Linux will run on many different hardware platforms, from Intel and AMD chips to computers running IBM PowerPC processors, and from the slowest 386 machines to the biggest mainframes on the planet; newer versions of Windows will not run on nearly as wide a range of hardware. Linux can even be configured to run on Apple computers, iPods, or smartphones. A disadvantage of Linux arises with hardware devices such as printers, scanners, or digital cameras: whereas the driver software for these devices is often easily available for Windows, with Linux you are for the most part left on your own to find drivers. Most Linux users will find comfort in the fact that drivers for the latest hardware are constantly being written by coders throughout the world and are usually made available quickly.
(ref #1) One of the most notable differences between the two operating systems is Windows' legendary problems with malicious code, known as viruses and spyware. Viruses, spyware, and a general lack of security are the biggest problems facing the Windows community. Under Windows, viruses and spyware can execute themselves with little or no input from the user, which makes guarding against them a constant concern for any Windows user. Windows users are forced to employ third-party antivirus software to limit the possibility of the computer being rendered useless by malicious code. Antivirus software often has the negative side effect of hogging system resources, thus slowing down your entire computer; most antivirus software also requires a paid subscription and constant updates in order to stay ahead of intruders. With Linux, on the other hand, problems with viruses are practically non-existent, and in reality you do not even need virus protection for your Linux machine. One reason viruses and spyware are not a problem for Linux is simply that far fewer are being made for Linux. A more important reason is that running a virus on a Linux machine is more difficult and requires a lot more input from the user. With Windows you may accidentally run and execute a virus by opening an email attachment or by double-clicking a file that contains malicious code. With Linux, however, a virus would need to run in the terminal, which requires the user to give the file execute permissions and then open it in the terminal. For example, to run a virus embedded in an email attachment, the user would have to open the attachment, save it, right-click the file and choose Properties from the menu, give it execute permissions there, and then open the file in the terminal to run it. And even then the user would only be able to damage his or her home folder; all other users' data would be left untouched, and all root system files would also remain untouched, because Linux requires a root password to make changes to them. The only way the user could damage the whole computer would be to log in as the root user, by providing the root user name and password to the terminal, before running the virus. Unlike in Windows, in Linux an executable file cannot run automatically; it needs to be given execute permissions manually, which significantly improves security. In Linux, the only realistic reason you would need virus protection is if you share files with Windows users, and that is to protect them, not you, so that you do not accidentally pass a virus to the Windows computer you are sharing files with.
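To make the point concrete, here is roughly what that deliberate sequence looks like at the terminal (the file name is hypothetical):

    chmod +x suspicious-attachment    # the user must grant execute permission by hand
    ./suspicious-attachment           # even now, it can only damage this user's home folder
    su -                              # root's password would be required before any
                                      # system files could be touched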
The above was a general overview of some differences between the Windows operating system and Linux. To recap, we started with the fact that Windows has only one vendor that releases the software, while Linux comes from millions of different coders throughout the world. We also noted that the Linux kernel and many of the applications used with it are completely free of charge, whereas with Windows you are forced to pay for most of the software. Unlike Windows, Linux is often lacking in professional tech support, and Linux users are often left on their own to solve technical issues; they can either pay for tech support or rely on the many Linux forums and groups available on the Internet. Because the kernel is open source, Linux has a huge advantage over Windows in configurability: you can configure Linux to run almost any way you see fit by manipulating the kernel. Installing the Windows operating system and applications is easier because of its universal installer, and finding applications for Windows is easier because its popularity means most apps are made for Windows only and are easily available. Linux will run on a greater variety of hardware than Windows, from mainframe supercomputers running multiple IBM PowerPC chips to a small laptop running an AMD processor. And of course the biggest difference, in this writer's opinion, is that Linux does not suffer from an onslaught of viruses and other malicious code, unlike Windows, which is plagued by countless pieces of malicious code that can easily destroy your system if not properly guarded against.
In conclusion, the Linux OS really is the superior software. Other than a few minor nuisances, Linux outperforms Windows in most categories. The fact that Linux is more secure is the tipping point that tilts the scales in favor of Linux. Windows simply suffers from far too many security vulnerabilities for it to be considered the better overall desktop environment.
References
Reference #1: http://www.michaelhorowitz.com/Linux.vs.Windows.html
Reference #2: http://www.theinquirer.net/en/inquirer/news/2004/10/27/linux-more-secure-than-windows-says-study
Reference #3: http://www.linux.com/whatislinux/
Reference #4: http://www.linux.org/info/
Reference #5: http://en.wikipedia.org/wiki/Linux%5Fkernel
Reference #6: http://en.wikipedia.org/wiki/KDE
Reference #7: http://en.wikipedia.org/wiki/GNOME
E-Commerce Strategy Development - Online Music Case Study
The UK online music market is potentially huge. Over the last eighteen months a great number of legitimate music services like ours have emerged to take advantage of the new music distribution model pioneered by Napster's Shawn Fanning in 1999. Although we currently hold 35% of the online music market, we will have to continue to develop our strategy and online practices if we want to build our market share and compete with the big international competitors, namely the iTunes network. This document is both an analysis of our current strategy and a proposal to extend it.
Analysis of current system
The strategy we have developed over the last two years centres on selling songs on a price-per-song basis. This is the basic strategy that all online music vendors have adopted. One of the key factors in Apple's success was its famously low 99-cents-per-song price tag, and because of this we, like many other online music providers, will find it very difficult to compete on pricing. According to popular legend, Apple secured this low price by refusing to sign the terms offered by the record labels, then going ahead and launching iTunes anyway, daring the record labels to pull out. Labels have repeatedly tried to renegotiate this deal to no avail; none of them are willing to risk pulling out of the iTunes network and losing their foothold in the paid download business. As well as 'pay per song' there are a number of other tactics for selling music online. One method, proposed by Ken Hertz, who represents Alanis Morissette among other recording artists, is a flat-fee collective licensing system. In flat-fee collective licensing, customers pay a fixed subscription fee to be allowed to download as much content as they want. This income is then divided among the content providers based on the percentage uptake of their content, as opposed to the unit uptake of their content (Fisher, W.W., 2004). Fisher believes this model will lead to a reduced profit per song but an increased uptake of the service. This has already been shown to be an effective business model when applied to video rental: having been pioneered by Blockbuster with their £13.99 online video rental service, it has since been adopted by Amazon and Screen Select to provide similar services. I believe this model would be successful for us as it lets customers feel that by using the service regularly they are getting good value for money. Value for money has been a sticking point for music fans for a long time; many people justify using illegal services like eMule or Limewire by claiming that the cost of purchasing music legally is excessive for the product. The main problem with this model is that it would require the content owners (the record labels) to license their work to distributors.
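To make the division concrete, here is a minimal sketch (in shell script, purely for illustration, with hypothetical figures): a £100,000 monthly subscription pool split by each label's share of total downloads rather than by unit price:

    POOL=100000                                           # total monthly subscription revenue in GBP
    for entry in "LabelA:50" "LabelB:30" "LabelC:20"; do  # label:percentage of downloads
        name=${entry%%:*}; pct=${entry##*:}
        echo "$name receives £$(( POOL * pct / 100 ))"
    done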
Review of competitor activity
Our market is currently divided among a number of legal and illegal music services. These include services like Amazon, where you can order a physical copy of a music CD online; services like the new Napster, where you can search for and download both free and paid-for music; and (semi-)illegal services like eMule P2P, where you can download anything you want for free.
Napster:
Napster has been involved in mainstream online digital distribution of music longer than any other company, and is arguably the most famous company in the field. Napster was launched as a free music sharing facility in 1999 and faced legal battles from the outset. It was finally forced to succumb to business pressure in 2001, at which point it began the six-month process of re-inventing itself as a legal service. This brings us to the Napster we see today. Napster currently offers its customers three packages: 'Napster To Go' for £14.95 a month, Napster Membership for £9.95, and Napster Lite. 'Napster To Go' and Napster Membership allow customers to download as much music as they like to their computer. 'Napster To Go' also allows music to be downloaded direct to MP3 players via special terminals in high street stores and internet-ready televisions. Napster Lite is Napster's basic free package. It allows customers to pay for music on a per-song basis at 79p. Customers can choose what they want by listening to 30-second segments of the content before they commit to purchasing. Napster Lite also allows its users to access music stored on other Napster users' shared space, but this is carefully screened to prevent piracy. One flaw in the Napster system is that in order to continue using the content you download, you need to keep paying for the service. This is likely to lead to consumer scepticism, as people won't like the idea of being trapped in the service to keep their music collection.
iTunes Music Store:
The iTunes Music Store opened its doors for service on 28th April 2003. Its strongest asset is its seamless operation with the Apple iPod. The iPod is the most popular MP3 player currently available, and like most Apple products, usability was high on its priority list during development. The existing popularity of the iPod and iTunes combined made extending the commercial attributes of the system a simple task. The music content is protected with Apple's FairPlay digital rights management (DRM), but there are several hacks for this which Apple has so far been unsuccessful in blocking. Since its launch the range of features it offers has continued to expand. You can now buy gift certificates, download video and special content, create your own iTunes store, and upload your own music via GarageBand, Apple's music production suite. By allowing users to produce and sell their own music, Apple has opened the door for its service to be used in many novel ways. For example, Stanford has recently started using iTunes to freely distribute special academic and promotional content centred around learning and living on campus.
P2P networks:
The P2P networks are arguably the greatest threat to our model of business. Despite frequent lawsuits and attempts at sabotage, record labels have been unable to shut them down. A peer-to-peer network is basically a distributed file system in which the shared content on every connected computer is grouped together into one super directory. A search facility then allows connected users to find and download what they want to their local shared directory. As content spreads across the network it becomes more accessible to other users. From the user's perspective this has the advantage of being free, but the disadvantage of being unreliable: content is often mislabelled or incomplete. There are a number of tactics employed by content owners to further disrupt P2P activity, including suing downloaders, distributing mislabelled content, and distributing content that harms a downloader's computer. Record labels have also attempted to stop piracy at the source by preventing users from copying their music to their computers. However, this method has proven unsuccessful as it can be easily circumvented using real-time encoding software, which encodes the music straight off the microphone jack. Record labels have also been sued by consumer rights groups and had their reputations tarnished over the legality of this tactic.
Online CD sales:
Among many consumers a consensus seems to have formed that paid music comes on CD and downloaded music is free. I personally like to have something physical to own when I purchase music, and for this reason online CD sales are still very popular. CDNOW, Amazon, and HMV online are some of the most popular retailers for this in the UK. A CD has the advantage of being a more tangible asset than a download and is therefore better suited to being given as a present, which makes a big difference to sales over the holiday season. It also doesn't require the same expertise to use as a downloaded track. A CD essentially works like a little metal version of a vinyl record: it is self-explanatory to every generation how to make a CD player play a CD, whereas many people, particularly in older generations, don't know how to use a computer. This gives a CD a much wider potential audience. It may be beneficial for us to also consider selling music on physical media.
E-Commerce strategy
In order to plan our future direction we need to take stock of our current position. We can do this using a SWOT analysis.
Strengths:
1) We currently hold 35% of the UK downloaded music market; in business terms this makes us the market leader. This is a large base of customers who will hopefully stay with us if we can continue to extend our services to compete with those of our competitors.
2) This plan identifies a number of new revenue streams that we hope to implement soon. These will, if implemented properly, lead to an increase in our revenue and customer base.
Weaknesses:
1) We have not attempted to compete in the international downloaded music market. It makes no sense for us to sell only to UK customers; traditional geographic limitations don't apply on the internet. The complication of extending our system to sell music in many currencies is small compared to the benefit of increasing our potential customer base a hundredfold.
2) We don’t yet have systems in place to deal with things like gift vouchers or coupons that could be used for promotion.
Opportunities:
1) We currently only allow our customers to purchase one song at a time from us. We could also allow them to purchase whole albums or customised content.
2) Although iTunes has secured a much better per-song price than we could, it does not currently offer a subscription service. Our second most popular competitor, Napster, does offer a subscription service, but its customers have to continue paying for the service to continue using the content they've downloaded. If we can negotiate a subscription service that doesn't lock the customer in, we will be seen as the superior service.
3) iTunes is never advertised by itself; it's always 'iPod + iTunes'. If we can adopt a similar music player, develop our software to work seamlessly with it, and negotiate cross-promotion, we will be doubling our exposure and simplifying the use of our service for customers. This would also allow us to extend our service in a similar way to 'Napster To Go': we could begin to sell our content in high street stores using dedicated terminals or via internet television, allowing our customer base to grow beyond the computer literate.
Threats:
1) File sharing networks offer the same service as us for free, and attempts to close these services down have so far been mostly ineffective. Although the closure of Napster in 2001 was highly publicised, it achieved little, as by that point many more services with more tenable legal positions had emerged.
2) Many people expect to get something tangible like a CD or DVD when they buy music. One of the major tasks that faces the downloaded music industry is convincing people of the value of an intangible asset like a computer file.
3) Our primary competitors, Napster and iTunes, continue to have a larger international customer base than us. They have more exposure and more assets with which to extend their services. We can't hope to win by competing within existing models; we need to develop new methods of selling music.
4) Our primary competitor, iTunes, has negotiated excellent prices with the content providers. Without the same economies of scale on our side it will be difficult to make the same deal.
In order to build on what we have achieved so far, I have compiled the following list of extensions to our service that we could implement in the near future:
1) Develop a subscription service – We should develop a subscription service based on flat-fee collective licensing that doesn't trap customers in the same way as Napster's services do. This will be seen as a superior product by our target audience as it allows them to get good value for money from the service.
2) Custom CD service – In order to take advantage of gift buying in the holiday season, we should provide a service where customers select a set of tracks to be put on a CD or DVD, design a cover, and maybe add a personal message. The CD will then be burned and the packaging will be printed and sent to the customer for an additional fee. Basically what I'm proposing is a professionally produced version of a mix tape. This provides an extra income for us on top of the audio track sales and gives the customer something physical to give as a present. This is a service that none of the music-download companies I have found currently offers.
3) Ally ourselves with a popular MP3 player – A big part of iTunes' success is its strong links and seamless operation with the iPod. By adopting a similar MP3 player, possibly the iRiver, we could tightly integrate our software with it, negotiate cross-promotion, and develop special terminals to sell our content in music stores, supermarkets, airports, train stations, or anywhere else people are likely to be in need of quick entertainment.
4) Develop our international presence – We should extend the functionality of our site to allow it to sell music in many currencies. By accepting euros and dollars we would be extending our potential customer base to twelve European countries, America, and a number of smaller countries. This is potentially ten times as many customers.
5) Host a music community – We should allow customers to upload and sell their own content, taking a percentage of the income for administration. We could get a much better percentage of income from independent artists than we could from a major label with bargaining power and experience. Some of the artists we host may well end up becoming the next big thing, which would be great advertising for our company.
6) Incorporate gift vouchers, coupons and special offers – Gift vouchers are a popular Christmas present. Coupons distributed in music culture magazines or by email, such as "Buy two tracks, get one free" or "First five tracks free when you sign up", would allow people to try our service before committing to it.
7) Recommend music based on past purchases – We could extend our system to recognise the sort of music a particular customer is likely to want based on past purchases. This would allow us to promote the right content to the right users so long as they're logged in (see the sketch after this list). Amazon has a similar technology built into its website, and it has prompted me to buy books and DVDs I wouldn't otherwise have found. People often have very specific music tastes, so once we ascertain which genres of music a customer likes it will be a simple task to predict what they will purchase in the future.
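As a minimal sketch of the genre-based approach in item 7 (assuming a hypothetical purchases.csv of 'customer_id,genre' rows), the following reports each customer's most-purchased genre, which we could then promote to them:

    awk -F, '{ count[$1 FS $2]++ }
        END {
            for (k in count) {
                split(k, parts, FS)
                if (count[k] > best[parts[1]]) {   # keep the genre bought most often
                    best[parts[1]] = count[k]
                    genre[parts[1]] = parts[2]
                }
            }
            for (c in best) print c ": promote " genre[c]
        }' purchases.csv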
Social/legal challenges
If we are to start selling internationally, how should we approach pricing? The relative value of currencies changes daily. If, for instance, we were to offer our subscription service for £19.99 GBP per month, at the time of writing this would exchange to $35.00 USD and €30.00 EUR. When the exchange rate changes, what should our policy be on updating prices? A policy that results in a rapidly changing price scheme will confuse our customers, but a policy where prices can't change quickly could result in us offering our service for too much or too little financial return. Another option would be to offer our service at different prices in different countries. This would allow us to better match the pricing trends in the local music industry. However, if we choose this option there is a possibility that our customers would start signing up in the region with the lowest prices.
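For reference, the figures above follow from the approximate rates implied by the text (about 1.75 USD and 1.50 EUR to the pound); a minimal sketch of the conversion:

    awk 'BEGIN {
        base = 19.99                          # monthly price in GBP
        printf "USD: %.2f\n", base * 1.75     # example GBP->USD rate
        printf "EUR: %.2f\n", base * 1.50     # example GBP->EUR rate
    }'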
In order to implement a subscription service we will first need to negotiate a collective licensing scheme with the content owners. As discussed earlier a collective licensing scheme will likely lead to a reduced profit per track downloaded but an increased uptake of the service. We therefore have to convince the content owners that this model is potentially more profitable than the current model of setting a fixed price per unit or collection of music content.
We will need to protect the rights of the content owners by incorporating anti-piracy measures. Preventing piracy is a very difficult task that no one has yet mastered; every time a new anti-piracy measure is introduced, it is usually circumvented within three months (Moser, 2001). Apple currently uses FairPlay digital rights management and Napster currently uses Windows Media digital rights management, and both of these systems have already been circumvented. Content owners might not want a new service to operate on a security system that's no longer effective.
Analysis of current system
The strategy we have developed over the last two years centres around selling songs on a price per song basis. This is one basic strategy that all online music vendors have adopted. One of the key factors in Apple’s success was its famously low 99 cents per song price tag. Because of this, we, like many other online music providers will find it very difficult to compete in pricing. According to popular legend, Apple secured this low price by refusing to sign the terms offered by the record labels then going ahead and launching iTunes anyway, daring the record labels to pull out. Labels have repeatedly tried to renegotiate this deal to no avail. None of the labels are willing to risk pulling out of the iTunes network and losing their foothold in the paid download business. As well as ‘pay per song’ there are a number of other tactics for selling music online. One method proposed by Ken Hertz, who represents Alanis Morrissette among other recording artists, is a flat fee collective licensing system. In flat fee collective licensing customers pay a fixed subscription fee to be allowed to download as much content as they want. This income is then divided among the content providers based on the percentage uptake of their content, as opposed to the unit uptake of their content (Fisher. WW, 2004). Fisher believes this model will lead to a reduced profit per song but an increased uptake of the service. This has already been shown to be an effective business model when applied to video rental. Having been pioneered by Blockbuster with their £13.99 online video rental service it has since been adopted by Amazon and Screen Select to provide similar services. I believe this model would be successful for us as it lets customers believe that by using the service regularly they are getting good value for money. Value for money has been a sticking point for music fans for a long time. Often, many people justify using illegal services like Emule or Limewire by claiming that the cost of purchasing music legally is excessive for
the product. The main problem with this model is that it would require the content owners (the record labels) to license their work to distributors.
Review of competitor activity
Our market is currently divided among a number of legal and illegal music services. These include services like Amazon where you can order a physical copy of a music CD online, services like the new Napster where you can search and download both free and paid for music and (semi) illegal services like Emule P2P where you can download anything you want for free.
Napster:
Napster has been involved in mainstream online digital distribution of music longer than any other company, and is arguably the most famous company in the field. Napster was launched as a free music sharing facility in 1999, and faced legal battles from the outset. It was finally forced to succumb to business pressure in 2001, at which point it began the six month process of re-inventing itself as a legal service. This brings us to the Napster we see today. Napster currently offers its customers three packages, ‘Napster To Go’ for £14.95 a month, Napster Membership for £9.95 and Napster lite. ‘Napster To Go’ and Napster Membership allows customers to download as much music as they like to their Computer. ‘Napster To Go’ also allows music to be downloaded via special terminals in high street stores and internet ready televisions direct to MP3 players. Napster Lite is Napster’s basic free package. It allows customers to pay for music on a per song basis at 79p. Customers can choose what they want by listening to 30 second segments of the content before they commit to purchasing. Napster lite also allows its users to access music stored on other Napster users shared space, but this is carefully screened to prevent piracy. One flaw in the Napster system is that in order to continue using the content you download, you need to keep paying for the service. This will to lead to consumer scepticism as people won’t like the idea of being trapped in the service to keep their music collection.
iTunes Music Store:
iTunes music store opened its door for service on 28th April 2003. Its strongest asset is its seamless operation with the Apple iPod. The iPod is the most popular MP3 player currently available and like most Apple products, usability was high on its priority list during development. The existing popularity of the iPod and iTunes combined made the process of extending the commercial attributes of the system a simple task. The music content is protected with Apple’s fair play digital rights management (DRM) but there are several hacks for this which Apple has so far been unsuccessful in blocking. Since its launch the range of features it offers has continued to expand. You can now buy gift certificates, download video and special content, create your own iTunes store and upload your own music via garage band, Apple’s music production suite. By allowing their users to produce and sell their own music apple has opened the door for their service to be used in many novel ways. For example Stanford has recently started using iTunes to freely distribute special academic and promotional content centred around learning and living on campus.
P2P networks:
The P2P networks are arguably the greatest threat to our model of business. Despite frequent law suites and attempts at sabotage record labels have been unable shut them down. A peer to peer network is basically a distributed file system where the shared content on every connected computer gets grouped together into one super directory. A search facility then allows connected users to find and download what they want to their local shared directory. As content gets spread across the network it becomes more accessible to other users. From the users perspective this has the advantage of being free, but the disadvantage of being unreliable. Content is often mis-labelled or incomplete. There are a number of tactics employed by content owners to further disrupt P2P activity including suing downloader’s, distributing mis-labelled content, and distributing content that harms a downloader’s computer. Record labels have also attempted to stop piracy at the source, by preventing users from uploading their music to their computers. However, this method has proven unsuccessful as it can be easily circumvented using real time encoding software, which encodes the music straight off the microphone jack. Record labels have also been sued by consumer rights groups and had their reputations tarnished over the legality of this tactic.
Online CD sales:
Among many consumers a consensus seems to have formed that paid music comes with CD and downloaded music is free. I personally like to have something physical to own when I purchase music. For this reason online CD sales are still very popular. CDNOW, Amazon and HMV online are some of the most popular retailers for this in the UK. A CD has the advantage of being a more tangible asset than a download and is therefore better suited to being given as a present, which will make a big difference to sales over the holiday season. It also doesn’t require the same expertise to use as a downloaded track. A CD essentially works like a little metal version of a vinyl. It is self explanatory to every generation how to make a CD player play a CD where as many people, particularly in older generations don’t know how to use a computer. This gives a CD a much wider potential audience. It may be beneficial for us to also consider selling music on physical media.
E-Commerce strategy
In order to plan our future direction we need to take stock of our current position. We can do this using a SWOT analysis.
Strengths:
1) We currently hold 35% of the UK downloaded music market, in business terms this equates to a majority. This is a large base of customers who will hopefully stay with us if we can continue to extend our services to compete with those of our competitors.
2) With the help of this plan we have a number of new revenue streams that we will hopefully implement soon. These will, if implemented properly, lead to an increase in our revenue and customer base.
Weaknesses:
1) We have not attempted to compete in the international downloaded music market. It makes no sense for us to only sell to UK customers. Traditional geographic limitations don’t apply on the internet. The complication of extending our system to sell music in many currencies is small compared to the benefit of increasing our potential customer base a hundred fold.
2) We don’t yet have systems in place to deal with things like gift vouchers or coupons that could be used for promotion.
Opportunities:
1) We current only allow our customers to purchase one song at a time off us. We could also allow them to purchase whole albums or customised content off us.
2) Although iTunes has secured a much better per song price than we could, they do not currently offer a subscription service. Our second most popular competitor, Napster does offer a subscription service but their customers have to continue paying for the service to continue using the content they’ve downloaded. If we can negotiate a subscription service that doesn’t lock the customer in we will be seen as the superior service.
3) iTunes is never advertised by itself. It’s always ‘iPod + iTunes’. If we can adopt a similar music player, develop our software to work seamlessly with it and negotiate cross promotion we will be doubling our exposure and simplifying the use of our service for the customers. This would also allow us to extend our service in a similar way to ‘Napster To Go’. We could begin to sell our content in high street stores using dedicated terminals or via internet television. This would allow our customer base to grow beyond the computer literate.
Threats:
1) File sharing networks offer the same service as us for free. Attempts to close these services down have so far been mostly ineffective. Although the close of Napster in 2001 was highly publicised it was ineffective as by this point many more services with more tenable legal position had emerged.
2) Many people expect to get something tangible like a CD or DVD when they buy music. One of the major tasks that faces the downloaded music industry is convincing people of the value of an intangible asset like a computer file.
3) Our primary competitors, Napster and iTunes continue have a larger international customer base than us. They have more exposure and more assets to extend their service with. We can’t hope to compete by trying to out compete in existing models, we need to develop new methods of selling music.
4) Our primary competitor, iTunes, has negotiated excellent prices with the content providers. Without the same economies of scale on our side it will be difficult to make the same deal.
In order to build what we have achieved so far I have compiled the following list of extension to our service that we could implement in the near future:
1) Develop a subscription service – We should develop a subscription service based on flat fee collective licensing that doesn’t trap customers in the same way as Napster’s services. This will be seen as a superior product by our target audience as it allows them to get good value for money from the service.
2) Custom CD service - In order to take advantage of gift buying in the holiday season, we should provide a service where customers select a set of tracks to be put on a CD or DVD, design a cover, and maybe add a personal message. The CD will then be burned and the packaging will be printed and sent to the customer for an additional fee. Basically what I’m proposing is a professionally produced version of a mix tape. This provides an extra income for us on top of the audio track sales and gives the customer something physical to give as a present. This is a service that none of the music-download companies I have found currently offers.
3) Ally ourselves with a popular MP3 player – A big part of iTunes success is its strong links and seamless operation with the iPod. By adopting a similar MP3 player, possibly the iRiver, we could tightly integrate our software with it, negotiate cross promotion and develop special terminals to sell our content in music stores, super markets, airports, train stations or anywhere else people are likely to be in need of quick entertainment.
4) Develop our international presence – We should extend the functionality of our site to sell music in multiple currencies. By accepting euros and dollars we would extend our potential customer base to twelve European countries, America, and a number of smaller countries; this is potentially ten times as many customers.
5) Host a music community – We should allow customers to upload and sell their own content, taking a percentage of the income for administration. We could negotiate a much better share of income from independent artists than from a major label with bargaining power and experience. Some of the artists we host may well become the next big thing, which would be great advertising for our company.
6) Incorporate gift vouchers, coupons and special offers – Gift vouchers are a popular Christmas present. Coupons distributed in music culture magazines or by email, such as "Buy two tracks, get one free" or "First five tracks free when you sign up", would allow people to try our service before committing to it.
7) Recommend music from purchase history – We could extend our system to recognise the sort of music a particular customer is likely to want, based on past purchases. This would allow us to promote the right content to the right users whenever they are logged in; see the sketch below. Amazon has similar technology built into its website, and it has prompted me to buy books and DVDs I would not otherwise have found. People often have very specific music tastes, so once we ascertain which genres a customer likes, predicting what they will purchase becomes a simple task.
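As a rough illustration of how such a feature might work, the minimal Python sketch below (with a hypothetical catalogue and hypothetical track names) counts the genres of a customer's past purchases and promotes unpurchased tracks from their favourite genres:

```python
from collections import Counter

# Hypothetical catalogue: track title -> genre
CATALOGUE = {
    "Track A": "indie", "Track B": "indie",
    "Track C": "jazz",  "Track D": "electronic",
}

def recommend(purchases, limit=2):
    """Suggest unpurchased tracks from the customer's most-bought genres."""
    favourite_genres = Counter(CATALOGUE[t] for t in purchases if t in CATALOGUE)
    candidates = [t for t in CATALOGUE if t not in purchases]
    # Rank candidates by how often the customer has bought their genre.
    candidates.sort(key=lambda t: favourite_genres[CATALOGUE[t]], reverse=True)
    return candidates[:limit]

print(recommend(["Track A"]))  # ['Track B', ...] - indie ranks first
```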
Social/legal challenges
If we are to start selling internationally, how should we approach pricing? The relative value of currencies changes daily. If, for instance, we offered our subscription service for £19.99 per month, at the time of writing this would exchange to roughly $35.00 USD or €30.00 EUR. When the exchange rate changes, what should our policy be for updating prices? A policy that results in a rapidly changing price scheme will confuse our customers, but a policy where prices cannot change quickly could result in us offering our service for too much or too little financial return. Another option would be to offer our service at different prices in different countries. This would allow us to better match the pricing trends in each local music industry. However, if we choose this option, there is a possibility that customers would start signing up in the region with the lowest prices.
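One possible compromise between the two policies would be to recompute a displayed foreign price from the base GBP price only when the exchange rate has drifted beyond a tolerance, so prices stay stable day to day without moving too far from fair value. A minimal Python sketch, assuming a hypothetical 5% tolerance and illustrative exchange rates:

```python
# Hypothetical policy: hold a displayed foreign price steady until the
# converted base price drifts more than 5% away from it.
BASE_PRICE_GBP = 19.99
DRIFT_TOLERANCE = 0.05

def updated_price(displayed, rate_gbp_to_foreign):
    """Return the price to display, repricing only on significant drift."""
    fair_price = BASE_PRICE_GBP * rate_gbp_to_foreign
    drift = abs(fair_price - displayed) / fair_price
    return round(fair_price, 2) if drift > DRIFT_TOLERANCE else displayed

print(updated_price(35.00, 1.76))  # small drift: stays at 35.00
print(updated_price(35.00, 1.95))  # large drift: reprices to 38.98
```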
In order to implement a subscription service we will first need to negotiate a collective licensing scheme with the content owners. As discussed earlier, a collective licensing scheme will likely lead to a reduced profit per track downloaded but an increased uptake of the service. We therefore have to convince the content owners that this model is potentially more profitable than the current model of setting a fixed price per unit or per collection of music content.
We will need to protect the rights of the content owners by incorporating anti-piracy measures. Preventing piracy is a very difficult task that no one has yet mastered; every time a new anti-piracy measure is introduced, it is usually circumvented within three months (Moser, 2001). Apple currently uses FairPlay digital rights management and Napster currently uses Windows Media digital rights management. Both of these systems have already been circumvented, and content owners might not want a new service to operate on a security system that is no longer effective.
Ecommerce Implementation - A case study Technical Report
The e-commerce implementation at CanCric
Sam Harnett
Document Introduction
This document presents the analysis of an e-commerce implementation at CanCric, a Canterbury-based company that sells cricket products to sports stores around the UK. It aims to give a clearer picture of the e-commerce implementation the management has envisaged and to provide a basis for the feasibility analysis of the project.
Document Audience
This document has been prepared primarily for senior management. However, as no current employees have sufficient in-depth expertise to develop the application 'in-house', it is likely that a commercial service provider will be contracted for the job. This document will give the contractor a basic understanding of what needs to be done.
Document Contents
This document contains the analysis of the CanCric e-commerce implementation, and the following artefacts are included in the overall report.
· Project Introduction
· Project Benefits
· System Requirements
· Implementation Plan
· Risk Assessment
· Conclusion
Project Introduction
CanCric is a Canterbury-based company that sells cricket products to sports stores around the UK. Its products include cricket equipment (gloves, helmets, balls, bats, pads, socks, etc.) and memorabilia (flags, T-shirts, prints, etc.).
CanCric is a market leader because of its high-quality products and its attention to its customers. The company wants to further improve the quality of its products and create a stronger relationship with its customers by implementing an e-commerce system.
Project Benefits
The project will bring the following envisaged benefits to the company:
1. The new website will provide a better marketplace for the company, capturing much more business by publicizing its products on the internet.
2. The company will be able to increase its profit through increased sales; an increase in sales is part and parcel of an e-commerce implementation.
3. The company will be able to expand its market from regional to national, or national to international. Currently the company only sells its products within the UK.
4. The company will be able to decrease costs in the areas listed below:
a. Costs of creating the product
b. Cost of marketing, since online marketing is cheaper than other print media
c. Costs of distribution in some cases; for example, providing online brochures will reduce the cost of printing and shipping information to prospective clients
d. Costs of processing customer orders; this is the most important cost reduction the project will achieve, as the online application will streamline the repetitive activities and information processing involved in order handling.
e. The cost of handling customer phone calls will also be reduced by providing online discussion forums and suggestion boxes, which most customers will readily accept.
f. An online presence of information will decrease the manual handling of sales inquiries, reducing costs considerably in terms of time and resources.
g. E-commerce implementations can be enhanced to perform many other operations that give the user very rich information. One example is inventory forecasting: the system can be enhanced to forecast sales in upcoming months by analyzing past trends, which can greatly reduce the cost of carrying excess inventory.
5. Electronic content management services will be provided to reduce the cost of content retrieval and to provide security and controlled access rights.
6. Websites provide easy change management: with a website you can list prices and change them simply by editing the web page, whereas with a printed catalogue you are stuck with the expense of printing a new version whenever many prices need to change.
7. A website enables product customization, should CanCric ever want to support it.
8. The project will enable CanCric to build more collaborative and stronger relationships with its suppliers. This includes streamlining and automating the underlying business processes, enabling areas such as:
a. Online procurement of material
b. Replenishment
c. Information Management
d. Reduced inventories due to efficient information exchange
e. Reduced delivery delays.
System Requirements
Given below are the requirements of the system, listed by module:
Web Site
1. The website should be very user-friendly
2. The website should show the overall mission statement and vision of the company
3. The website should clearly present a complete catalogue of all products the company manufactures, along with pictures of each product.
4. The website should also highlight CanCric's future plans for product expansion.
5. Search functionality should be provided on the website
6. The website search should display the closest results to the search string (see the sketch after this list)
7. The search functionality should support input of multiple search criteria.
8. The functionality to display a site map of the whole website should be provided.
9. The website should also maintain the sponsored players' information
10. The sponsored players' information should be shown in column form
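To illustrate requirement 6, one simple way to return the "closest results" to a search string is fuzzy matching against the product names; Python's standard difflib module does this out of the box. The product names below are hypothetical:

```python
import difflib

# Hypothetical product names from the CanCric catalogue.
PRODUCTS = ["batting gloves", "batting pads", "cricket bat",
            "cricket ball", "wicket-keeping gloves"]

def search(query, limit=3):
    """Return the catalogue entries closest to a (possibly misspelt) query."""
    return difflib.get_close_matches(query.lower(), PRODUCTS,
                                     n=limit, cutoff=0.4)

print(search("criket batt"))  # ['cricket bat', ...]
```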
Discussion Boards
11. The website should also provide online discussion board functionality
12. The registered users should be able to participate in the discussions
13. Any visitor should be able to view the discussions
14. The system should be able to present discussions both as a tree and in chronological order (a minimal model is sketched after this list)
15. Users should be able to delete their own posts from the discussion boards
16. Users should not be able to edit discussion posts
17. The administrator should be able to administer the discussion boards.
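As a sketch of requirement 14, the minimal Python model below (hypothetical fields, not a full discussion board) shows how one list of posts with parent references can be presented both chronologically and as a tree:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Post:
    id: int
    parent: Optional[int]   # None for a top-level post
    text: str
    replies: list = field(default_factory=list)

posts = [Post(1, None, "Match thread"),
         Post(2, 1, "Reply"),
         Post(3, 1, "Another reply")]

# Chronological view: the posts in id (posting) order.
chronological = sorted(posts, key=lambda p: p.id)

# Tree view: attach each post to its parent's reply list.
by_id = {p.id: p for p in posts}
roots = []
for p in posts:
    (by_id[p.parent].replies if p.parent else roots).append(p)

print([reply.text for reply in roots[0].replies])  # ['Reply', 'Another reply']
```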
Online Order Processing
18. The website should allow users to maintain their user profiles.
19. The website should ensure that user information is not misused.
20. The website should provide the facility for users to securely store their credit card information for future use
21. The website should allow each user to maintain a shopping cart along with their profile (a minimal cart is sketched after this list)
22. The user should be able to add cricket products to their shopping cart.
23. The user should be shown the overall cost of the current selection in the shopping cart
24. The user should be able to delete entries from the shopping cart
25. The user should be able to preview the shopping cart at any time
26. The user should be able to save the current shopping cart for future use
27. The facility to buy the items in the shopping cart should be provided with complete security measures.
28. The shopping cart should be emptied after the transaction
29. The delivery options should be shown to the user.
30. The user should be given the option to choose a courier company of their choice.
31. The user should be able to change the billing details at any time during the online order processing procedure.
32. The payment options available should include:
* Payment through personal credit card
* Payment through company credit card
* Payment through the monthly billing option
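To make the cart requirements (21 to 28) concrete, here is a minimal Python sketch of a shopping cart; persistence, security and payment are deliberately left out, and the products and prices are hypothetical:

```python
class ShoppingCart:
    """Minimal cart covering requirements 21-28 (no persistence or payment)."""

    def __init__(self):
        self.items = {}  # product name -> (unit price, quantity)

    def add(self, product, price, qty=1):           # requirement 22
        _, old_qty = self.items.get(product, (price, 0))
        self.items[product] = (price, old_qty + qty)

    def delete(self, product):                      # requirement 24
        self.items.pop(product, None)

    def total(self):                                # requirement 23
        return sum(price * qty for price, qty in self.items.values())

    def checkout(self):                             # requirements 27-28 (stub)
        order, self.items = dict(self.items), {}    # cart empties after purchase
        return order

cart = ShoppingCart()
cart.add("cricket bat", 89.99)
cart.add("cricket ball", 12.50, qty=2)
print(round(cart.total(), 2))  # 114.99
```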
Research and development
33. The option to host an online survey should be provided
34. The data collected from surveys should be available for reporting and analysis purposes
35. Support for generating reports on given criteria should be provided in the system (a grouped-summary sketch follows this list)
36. Multiple types of reports should be available in the system
37. The sales data, the visit history on the website, and the survey results should all be available for analysis purposes.
38. For analysis purposes, the application should support technologies like data mining and data warehousing.
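As a token of the reporting requirements (35 to 37), the Python sketch below groups hypothetical sales rows into per-product totals, the kind of summary a report generator would build on:

```python
from collections import defaultdict

# Hypothetical sales records: (product, units, revenue in GBP)
SALES = [("cricket bat", 3, 269.97), ("cricket ball", 10, 125.00),
         ("cricket bat", 1, 89.99)]

def sales_report(records):
    """Group raw sales rows into per-product totals (requirement 35)."""
    totals = defaultdict(lambda: [0, 0.0])
    for product, units, revenue in records:
        totals[product][0] += units
        totals[product][1] += revenue
    return dict(totals)

print(sales_report(SALES))
# {'cricket bat': [4, 359.96], 'cricket ball': [10, 125.0]}
```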
Implementation Plan
The transition from the current business to the new computerized system will involve the following steps:
1. Training of the staff who will use the new application
2. Building confidence in the new system among CanCric employees
3. Web Hosting Plan
Web Hosting
E-commerce web hosting means adding an e-commerce application to a web hosting package, which is what allows you to have a store on the internet. To have a website or store on the internet we will need a web server. Unfortunately, owning a web server can be very costly and requires technical expertise that CanCric lacks, so we will have to outsource management and hosting to a web hosting company, which will provide the equipment and other technical resources.
I have identified these ecommerce packages and hosting plans as suitable for CanCric’s needs.
Hosting:
1. http://www.bluehost.com – Bluehost offers 10 GB of storage and 250 GB of transfer each month, as well as SSL, PHP and MySQL, which are commonly required by ecommerce applications. It also has an excellent account administration system that incorporates management of databases and private server settings. The package also comes with free wiki and forum software that can be easily integrated. Bluehost is available from £30 per year.
2. http://www.newnet.co.uk – Newnet offers hosting packages from £57 per year and co-location from £150. Although our needs do not yet require co-location, having the option to upgrade to co-location on the same premises will be an asset. The host offers the scripting and SSL services necessary for ecommerce and a managed backup timetable to secure our business's data.
3. http://www.ukip.co.uk – UKIP is a broadband provider that also offers hosting services. They offer the same hosting and scripting services as Bluehost and Newnet; they also support Active Server Pages and JavaServer Pages, which will allow us to consider a wider range of ecommerce applications.
Ecommerce:
1. http://www.actinic.co.uk – Actinic is one of the most popular systems currently available, with over 10,000 users in 40 countries. Actinic can integrate with Dreamweaver MX and has a range of features suited not just to ecommerce but also to many other facets of business management, such as stock monitoring. Actinic Business is suitable for our level of development and costs £799 per license. Actinic is also capable of both B2B and B2C practices.
2. http://www.storefront.net – StoreFront is an API that can be integrated into Dreamweaver or FrontPage, or used independently. It provides mechanisms for displaying products and managing a shopping cart. It is designed to allow maximum creativity for the designer while still taking care of technicalities such as shopping and paying.
Risk Assessment
There are also a few risks associated with the e-commerce system implementation at CanCric, which must be mitigated to make the project successful.
1. The hardware and network infrastructure should be sufficient to run and access the system
2. A level of confidence in the new system must be built within the company; employees can be expected to resist such organizational change.
3. Customers may fear that their personal information will be misused; such concerns should be catered for in the system.
4. Rules and regulations must be redefined according to the new system.
5. Security and privacy must be handled carefully since the CanCric system involves online transaction support.
Conclusion
The e-commerce implementation will provide CanCric with an excellent opportunity to capture international markets for cricket goods. However, there are a few things we should take care of. First of all, we must avoid over-expecting from e-commerce, lest we create a backlash when its promises are not fulfilled; e-commerce will not, in and of itself, correct all the business's problems. Secondly, the benefits of e-commerce will be restricted if we do not recognize its full system implications and instead implement it in limited ways that only partially meet its requirements.
Operating Systems: File Systems
File systems are an integral part of any operating system with the capacity for long-term storage. There are two distinct parts to a file system: the mechanism for storing files and the directory structure into which they are organised. In modern operating systems, where it is possible for several users to access the same files simultaneously, it has also become necessary for features such as access control and different forms of file protection to be implemented.
A file is a collection of binary data. A file could represent a program, a document, or in some cases part of the file system itself. In modern computing it is quite common for there to be several different storage devices attached to the same computer. A common data structure such as a file system allows the computer to access many different storage devices in the same way; for example, when you look at the contents of a hard drive or a CD, you view it through the same interface even though they are completely different media with data mapped onto them in completely different ways. Files can have very different data structures within them, but all can be accessed by the same methods built into the file system. The arrangement of data within the file is then decided by the program creating it. The file system also stores a number of attributes for the files within it.
All files have a name by which they can be accessed by the user. In most modern file systems the name consists of three parts: its unique name, a period, and an extension. For example, the file 'bob.jpg' is uniquely identified by the first word 'bob'; the extension 'jpg' indicates that it is a JPEG image file. The file extension allows the operating system to decide what to do with the file if someone tries to open it. The operating system maintains a list of file extension associations; should a user try to access 'bob.jpg', it would most likely be opened in the system's default image viewer.
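An extension association table can be pictured as a simple lookup from extension to handler, as in the illustrative Python sketch below (the handler names are hypothetical):

```python
import os

# Hypothetical extension-association table, as an OS might maintain.
ASSOCIATIONS = {".jpg": "image viewer", ".txt": "text editor", ".mp3": "media player"}

def default_handler(filename):
    """Split the name from its extension and look up the handler."""
    _, ext = os.path.splitext(filename)   # 'bob.jpg' -> ('bob', '.jpg')
    return ASSOCIATIONS.get(ext.lower(), "ask the user")

print(default_handler("bob.jpg"))  # image viewer
```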
The system also stores the location of a file. In some file systems, files can only be stored as one contiguous block. This simplifies storage and access to the file, as the system then only needs to know where the file begins on the disk and how large it is. It does, however, lead to complications if the file is to be extended or removed, as there may not be enough space available to fit the larger version of the file. Most modern file systems overcome this problem by using linked file allocation, which allows the file to be stored in any number of segments. The file system then has to store where every block of the file is and how large each one is. This greatly simplifies file space allocation but is slower than contiguous allocation, as the file may be spread out all over the disk. Modern operating systems mitigate this flaw by providing a disk defragmenter, a utility that rearranges all the files on the disk so that each is in contiguous blocks.
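The difference between the two allocation schemes can be modelled in a few lines. In this toy Python sketch (not a real on-disk format), a contiguous file is a start block plus a length, while a linked file is a chain of block numbers that may sit anywhere on the disk:

```python
# Toy model of the two allocation schemes (not a real on-disk format).

# Contiguous allocation: one start block plus a length.
contiguous_file = {"start": 120, "length": 4}   # occupies blocks 120-123

# Linked allocation: an ordered chain of blocks, possibly scattered.
linked_file = [120, 121, 457, 9]                # read in this order

def blocks_to_read(kind, f):
    """Return the disk blocks of the file, in file order."""
    if kind == "contiguous":
        return list(range(f["start"], f["start"] + f["length"]))
    return list(f)                              # linked: follow the chain

print(blocks_to_read("contiguous", contiguous_file))  # [120, 121, 122, 123]
print(blocks_to_read("linked", linked_file))          # [120, 121, 457, 9]
```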
Information about a file's protection is also integrated into the file system. Protection can range from the simple systems implemented in the FAT file system of early Windows, where files could be marked as read-only or hidden, to the more secure systems implemented in NTFS, where the file system administrator can set up separate read and write access rights for different users or user groups. Although file protection adds a great deal of complexity and potential difficulty, it is essential in an environment where many different computers or users can access the same drives via a network or a time-shared system such as raptor.
Some file systems also store data about which user created a file and at what time. Although this is not essential to the running of the file system, it is useful to the system's users.
For a file system to function properly, it needs a number of defined operations for creating, opening and editing a file. Almost all file systems provide the same basic set of methods for manipulating files.
A file system must be able to create a file. To do this there must be enough space left on the drive to fit the file, and there must be no other file with the same name in the directory where it is to be placed. Once the file is created, the system makes a record of all the attributes noted above.
Once a file has been created we may need to edit it. This may be simply appending some data to the end of it or removing or replacing data already stored within it. When doing this, the system keeps a write pointer marking where the next write operation to the file should take place.
For a file to be useful it must of course be readable. To read a file, all you need to know is its name and path; from this the file system can ascertain where on the drive the file is stored. While reading a file the system keeps a read pointer, which stores which part of the drive is to be read next.
In some cases it is not possible to simply read all of a file into memory, so file systems also allow you to reposition the read pointer within a file. To perform this operation, the system needs to know how far into the file you want the read pointer to jump. An example of where this is useful is a database system: when a query is made on the database, it is obviously inefficient to read the whole file up to the point where the required data is. Instead, the application managing the database determines where in the file the required piece of data is and jumps to it. This operation is often known as a file seek.
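These pointers map directly onto the interfaces operating systems expose to programs. The short Python sketch below writes a small file, then repositions the read pointer instead of reading from the start:

```python
import os, tempfile

path = os.path.join(tempfile.gettempdir(), "seek_demo.bin")
with open(path, "wb") as f:
    f.write(b"0123456789")      # ten bytes on disk

with open(path, "rb") as f:
    f.seek(6)                   # reposition the read pointer (a "file seek")
    print(f.read(2))            # b'67' - no need to read bytes 0-5 first
    print(f.tell())             # 8: where the read pointer now sits

os.remove(path)                 # delete: drop the entry, free the space
```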
File systems also allow you to delete files. To do this, the system needs to know the name and path of the file. It then simply removes the file's entry from the directory structure and adds all the space it previously occupied to the free space list (or whatever other free space management system it uses).
These are the most basic operations required for a file system to function properly. They are present in all modern computer file systems, but the way they function may vary. For example, performing the delete operation in a modern file system like NTFS, which has file protection built in, is more complicated than the same operation in an older file system like FAT. Both systems would first check whether the file was in use before continuing; NTFS would then also have to check whether the user deleting the file has permission to do so. Some file systems also allow multiple people to open the same file simultaneously and have to decide whether users have permission to write a file back to disk while other users have it open. If two users have read and write permission to a file, should one be allowed to overwrite it while the other still has it open? And if one user has read-write permission and another only has read permission on a file, should the user with write permission be allowed to overwrite it if there is no chance of the other user also trying to do so?
Different file systems also support different access methods. The simplest method of accessing information in a file is sequential access, where the information in the file is accessed from the beginning one record at a time. To change the position in the file, it can be rewound or forwarded a number of records, or reset to the beginning. This access method is based on file storage systems for tape drives, but it works as well on sequential access devices (like modern DAT tape drives) as it does on random-access ones (like hard drives). Although this method is very simple in operation and ideally suited to certain tasks such as playing media, it is very inefficient for more complex tasks such as database management. A more modern approach that better facilitates non-sequential reading is direct access. Direct access allows records to be read or written in any order the application requires. This suits modern hard drives, which likewise allow any part of the drive to be read in any order with little reduction in transfer rate. Direct access is better suited to most applications than sequential access, as it is designed around the most common storage medium in use today, rather than one that is rarely used anymore except for large offline backups. Given the way direct access works, it is also possible to build other access methods on top of it, such as sequential access, or an index of all the records in a file to speed up finding data.
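With fixed-size records, direct access reduces to arithmetic: record n lives at byte offset n times the record size, so the system can jump straight to it. A minimal sketch, assuming hypothetical 16-byte records:

```python
import io

RECORD_SIZE = 16  # fixed-size records make direct access simple arithmetic

def read_record_direct(f, n):
    """Jump straight to record n (direct access)."""
    f.seek(n * RECORD_SIZE)
    return f.read(RECORD_SIZE)

def read_records_sequential(f):
    """Read records one after another from the start (sequential access)."""
    f.seek(0)
    while record := f.read(RECORD_SIZE):
        yield record

data = io.BytesIO(b"A" * 16 + b"B" * 16 + b"C" * 16)  # three toy records
print(read_record_direct(data, 2))                    # record 2, read directly
print(len(list(read_records_sequential(data))))       # 3, scanned in order
```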
On top of storing and managing files on a drive, the file system also maintains a system of directories in which the files are referenced. Modern hard drives store hundreds of gigabytes, and the file system helps organise this data by dividing it up into directories. A directory can contain files or further directories. As with files, there are several basic operations that a file system needs to be able to perform on its directory structure to function properly.
It needs to be able to create a file. This is also covered by the overview of file operations above, but as well as creating the file, an entry for it must be added to the directory structure.
When a file is deleted, the space taken up by the file needs to be marked as free space, and the file itself needs to be removed from the directory structure.
Files may need to be renamed. This requires an alteration to the directory structure, but the file itself remains unchanged.
Finally, it must be able to list a directory. To use the disk properly, the user needs to know what is in all the directories stored on it and to be able to browse through them. (These operations are illustrated in the sketch below.)
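These directory operations correspond to calls every modern operating system exposes. In Python's os module they look like this (run in a scratch directory so it is safe to try):

```python
import os, tempfile

root = tempfile.mkdtemp()                 # a scratch directory to play in

# Create: make a file, thereby adding an entry to the directory structure.
open(os.path.join(root, "notes.txt"), "w").close()

# Rename: alters the directory entry; the file's data is unchanged.
os.rename(os.path.join(root, "notes.txt"), os.path.join(root, "todo.txt"))

# List: show what the directory now contains.
print(os.listdir(root))                   # ['todo.txt']

# Delete: remove the entry and free the file's space.
os.remove(os.path.join(root, "todo.txt"))
os.rmdir(root)
```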
Since the first directory structures were designed, they have gone through several large evolutions. Before directory structures were applied to file systems, all files were stored at the same level; this is essentially a system with one directory in which all the files are kept. The next advancement, which would be considered the first directory structure, is the two-level directory, in which there is a single list of directories, all on the same level, and the files are stored within them. This allows different users and applications to store their files separately. After this came the first directory structures as we know them today: directory trees. Tree-structured directories improve on two-level directories by allowing directories as well as files to be stored in directories. All modern file systems use tree-structured directories, but many have additional features such as security built on top of them.
Protection can be implemented in many ways. Some file systems allow you to have password-protected directories; in this system, the file system will not allow you to access a directory until it is given a username and password for it. Others extend this by giving different users or groups access permissions: the operating system requires the user to log in before using the computer and then restricts their access to areas they do not have permission for. The system used by the computer science department for storage space and coursework submission on raptor is a good example of this. In a file system like NTFS, all types of storage space, network access, and use of devices such as printers can be controlled in this way. Other types of access control can also be implemented outside the file system; for example, applications such as WinZip allow you to password-protect files.
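On a POSIX-style system, these per-user and per-group rights are visible to programs as permission bits. The sketch below inspects them with Python's stat module; it only reads the rights on a throwaway file it creates itself:

```python
import os, stat, tempfile

path = os.path.join(tempfile.gettempdir(), "perm_demo.txt")
open(path, "w").close()
os.chmod(path, 0o640)              # owner: read/write, group: read, others: none

mode = os.stat(path).st_mode
print(bool(mode & stat.S_IRUSR))   # True  - owner may read
print(bool(mode & stat.S_IWGRP))   # False - group may not write
print(stat.filemode(mode))         # '-rw-r-----'

os.remove(path)
```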
There are many different file systems currently available on many different platforms, and depending on the type of application and the size of the drive, different situations suit different file systems. If you were to design a file system for a tape backup system, a sequential access method would be better suited than a direct access method, given the constraints of the hardware. Likewise, on a small hard drive in a home computer there would be no real advantage in using a more complex file system with features such as protection, as it is unlikely to be needed. If I were to design a file system for a 10-gigabyte drive, I would use linked allocation over contiguous allocation to make the most efficient use of the drive space and limit the time needed to maintain the drive. I would also design a direct access method rather than a sequential one, to make the most of the strengths of the hardware. The directory structure would be tree-based, to allow better organisation of information on the drive, and would allow for acyclic directories to make it easier for several users to work on the same project. It would also have a file protection system that allowed different access rights for different groups of users, and password protection on directories and individual files. Several file systems that already implement the features I have described as ideal for a 10-gigabyte hard drive are currently available; these include NTFS, for the Windows NT and XP operating systems, and ext2, which is used in Linux.
Understanding The Specifications Puzzle
"You like potato and I like potahto
You like tomato and I like tomahto
Potato, potahto, tomato, tomahto,
Let's call the whole thing off."
- Lyrics by Ira Gershwin; Music by George Gershwin
Defining specifications for the design and development of systems and software is a lot like this classic Gershwin song, and it is what I personally regard as the biggest cause of confusion in the Information Technology field for as long as I can remember, which is over 30 years in the industry. Some people say specifications should be based on the inherent properties of information; others believe they should be based on a screen, report, or file layout; yet others adamantly believe they should be based on process and data specifications. Interestingly, all are absolutely correct. The difference lies in the perspective of the person and the work to be performed. For example, how we define specifications for the design of an automobile is certainly different from how we specify a skyscraper. The same is true in the I.T. field, where we have different things to be produced by different people; for example:
1. THE PROGRAMMER (aka Software Engineer) requires precise specifications in order to develop program code (source and object). These normally take the form of processing requirements (e.g., hardware configuration, types of transactions to be processed, volume, timing, messages, etc.) and physical data requirements (input/output/file layouts).
2. THE DBA (Data Base Administrator) requires precise specifications in order to select a suitable file management technique (e.g., a DBMS) and produce the necessary Data Definition Language (DDL) for it. These normally take the form of a logical database model representing relationships between data entities.
3. THE ANALYST (aka Systems Analyst, Systems Engineer, Systems Architect, Business Analyst) requires specifications of the end-User's information requirements in order to design a system solution. These are normally based on a definition of the user's business actions and/or decisions to be supported. Following the system design, the Analyst produces the specifications required by the Programmer and DBA to fulfill their parts of the puzzle. From this perspective, the Analyst is the translator between the end-User and the Programmers and DBAs.
Each party has a unique perspective on the puzzle and, as such, requires different "specifications." To compound the problem, though, the role of the Analyst has sharply diminished over the years, leaving it to Programmers to try to determine what the end-User needs, a skill they are typically not trained or suited for. To illustrate, I am reminded of the story of the IT Director at a shoe manufacturing company who received a call from the corporate Sales Manager asking for help with a pressing problem. The IT Director sent over one of his programmers to meet with the Sales Manager and discuss the problem. Basically, the manager wanted a printout of all shoe sales sorted by model, volume, type, color, etc. The programmer immediately knew how to access the necessary data and sorted it accordingly, producing a voluminous printout (three feet high) which he dutifully delivered to the user.
The IT Director stopped by the Sales Manager's office a few days later to inquire if the programmer had adequately serviced the user. The sales manager afforded the programmer accolades on his performance and proudly pointed at the impressively thick printout sitting on his desk. The IT Director then asked how the manager used the printout. He explained he took it home over the weekend, slowly sifted through the data, and built a report from it showing sales trends.
"Did you explain to the programmer you were going to do this?" asked the IT Director.
"No," replied the Sales Manager.
"Aren't you aware we could have produced your report for you and saved you a lot of time and effort?"
"No."
This is a classic example of the blind leading the blind. The user did not know how to adequately describe the business problem, and the programmer asked the wrong questions. Remarkably, both the Sales Manager and programmer were delighted with the results. The IT Director simply shook his head in disbelief.
There are substantial differences between specifying information requirements and specifying software. Both have their place, but both serve different purposes. Whereas a true Analyst investigates the underlying business rationale of the information, the Programmer lives in the physical world and is only concerned with how the software will work.
It is not uncommon to hear programmers lament, "Users do not know what they want." They may not know how it should physically look or how it should best be delivered, but Users most definitely know what they want from an information point of view. Most programmers simply are not asking the right questions. Then again, they were not trained for this and are trying to compensate for the lack of true Analysts.
Fortunately, the Analyst function is experiencing a resurgence in the industry as companies realize that a higher-level person is needed to understand the business and maintain a more global perspective of a company's systems and software. To illustrate, the process should fundamentally work like this:
1. Working with the User, the Analyst studies the business and helps the User specify information requirements.
2. From the requirements, the Analyst produces a system design, which may involve a new system, a modification of an existing system, or both. As part of the design, the Analyst defines:
* The logical processing of data in terms of how it is to be collected, stored, and retrieved.
* The business processes affected, including the parts implemented by the computer.
* The design of the inputs and outputs.
* The design of the logical data base model.
In considering the computer processing, the Analyst determines which portions can be implemented by a commercial package and which require custom programming.
3. The design specifications are conveyed to the Programmer and the DBA for implementation.
4. From the logical data base model, the DBA designs a physical solution and produces the necessary Data Definition Language. The DBA passes on the physical file layouts to the Programmer for implementation.
5. The Programmer studies the software specifications and determines a suitable method of implementation, e.g., languages to be used, along with suitable tools and techniques for design.
For a graphic of this process, see: http://www.phmainstreet.com/mba/blog/ss080225.jpg
The real beneficiary of such an approach is the programmer, as the guesswork has been eliminated for him. This may be an oversimplification of the overall process, but it is intended to show the vital role the Analyst plays and how it contrasts with the other participants. In the absence of such a person, the Programmer inevitably defaults to the role of Analyst, and this is where specification problems begin to emerge.
This also hints at the limitations of "agile" methods. To their credit, the proponents of such methodologies recognize they are limited to software and, in particular, a single program. In doing so, they are trying to expedite the overall process of specification gathering in order to get to the job of programming.
In addition to defining the relationships between the various development functions, there is also the problem of developing a standard and consistent approach for recording specifications. This can be done orally, but more likely it is recorded using a documentation technique, both to communicate the work to be performed and as a means to check whether the finished product does indeed satisfy the specifications. In the fields of engineering and construction, standards such as blueprinting have been developed over the years to record specifications. But in the I.T. field, a myriad of techniques have been introduced with little or no standardization. For example, there are several different types of graphical and textual techniques, as well as repositories and data dictionaries to record and track specifications. Regardless, very few companies have adopted standards for recording specifications.
CONCLUSION
The problem with specifications in the design and development of systems and software is primarily due to a lack of standardization in the industry. There is a lack of standards in the areas of:
* Different types of deliverables resulting from the development process and how to format them (including specifications).
* Different development functions participating in the process, along with their interrelationships, and duties and responsibilities.
* Different perspectives of development in terms of the inherent properties of systems and software.
* Different methods, tools and techniques for performing design and development.
As long as there remains a lack of standardization in the I.T. industry, there will always remain a different interpretation of what specifications are and how to best document them. In other words, we'll go on saying "You like tomato and I like tomahto." So when do we call the whole thing off?
If you would like to discuss this with me in more depth, please do not hesitate to send me an e-mail at timb001@phmainstreet.com
"Good specifications will always improve programmer productivity far better than any programming tool or technique."
SSH Tunneling In Your Application
Introduction
This article is dedicated to the task of securing a MySQL client-server connection using functionality provided by the Secure Shell (SSH) protocol. To be exact, it utilizes the SSH tunneling concept. We will review the steps needed to build secure MySQL client applications and implement a sample one ourselves.
MySQL traffic is not the only kind of data that can be tunneled by the Secure Shell. SSH can be used to secure any application-layer TCP-based protocol, such as HTTP, SMTP and POP3. If your application needs to secure such a protocol by tunneling it through a protected SSH connection, this article will be useful to you.
Background
Let's imagine that we are developing an enterprise application that needs to send requests to a number of SQL servers all over the world and get responses from them (say, a powerful banking system that stores information about millions of accounts).
All the data between the application and the SQL servers is transferred via the Internet "as is". Since most protocols used by SQL servers do not provide data integrity and confidentiality (and those that do, do it in a rather nontransparent way), all the transferred requests and responses may (and be sure, they will!) become visible to a passive adversary. An active adversary can cause much more serious problems: he can alter the data, and no one will detect it.
SSH (Secure Shell) is a protocol that can help solve this problem. One of its outstanding features is its ability to tunnel different types of connections through a single confidential, integrity-protected connection.
Now you do not have to worry about securing the data transferred over the Internet; SSH will handle this for you. In particular, SSH will take care of the following security aspects:
* Strong data encryption according to the latest industry-standard algorithms (AES, Twofish)
* Authentication of both the client and server computers
* Data integrity protection
* Resistance to different kinds of network attacks
* Compression of the data being tunneled
* Complete independence from operating system and network specifics
Tunneling (or forwarding) works in the following way:
1. The SSH client opens a listening port on some local network interface and tells the SSH server that it wishes to forward all connections accepted on this port to some remote host.
2. When a connection is accepted on the listening port, the SSH client informs the SSH server, and together they establish a logical tunnel for it. At the same time, the SSH server establishes a new TCP connection to the remote host agreed upon in step 1.
3. The SSH client encrypts all the data it receives from the accepted connection and sends it to the SSH server. The SSH server decrypts the data received from the SSH client and sends it to the remote host.
Please note that the SSH client acts as a TCP server for the connections it accepts, and that the SSH server acts as a TCP client for the connections it establishes to the remote host.
A single SSH connection can tunnel as many application layer connections as needed. This means that you can defend your server by moving all the listening ports (e.g., database and application server ports) to a local network, leaving only the SSH port open. It is much easier to take care of a single port, rather than a dozen different listening ports.
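To make the forwarding mechanics concrete, here is a minimal sketch of the relay idea in plain C#. This is not SSH (there is no encryption, and the host and port values are hypothetical); a real SSH client performs the same accept-and-relay loop but pushes the bytes through the encrypted channel instead of a raw TCP connection:

using System.Net;
using System.Net.Sockets;

class MiniForwarder
{
    static void Main()
    {
        // Listen on a local port (3306 here), just as the SSH client does.
        TcpListener listener = new TcpListener(IPAddress.Loopback, 3306);
        listener.Start();
        while (true)
        {
            TcpClient local = listener.AcceptTcpClient();
            // A real SSH client would open a logical tunnel here instead;
            // "db.example.com" is a hypothetical remote host.
            TcpClient remote = new TcpClient("db.example.com", 3306);
            Pump(local.GetStream(), remote.GetStream());
            Pump(remote.GetStream(), local.GetStream());
        }
    }

    // Copy bytes one way on a background thread until the stream closes.
    static void Pump(NetworkStream from, NetworkStream to)
    {
        new System.Threading.Thread(delegate()
        {
            byte[] buffer = new byte[4096];
            int read;
            try
            {
                while ((read = from.Read(buffer, 0, buffer.Length)) > 0)
                    to.Write(buffer, 0, read);
            }
            catch (System.IO.IOException) { /* connection closed */ }
        }).Start();
    }
}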
Into the Fire
Let's develop a small application that illustrates the use of SSH forwarding capabilities. We will consider the important task of securing the connection between a MySQL client application and a MySQL server. Imagine that we need to get information, in a secure way, from a database server located a thousand miles away from us.
SecureMySQLClient is the application we are planning to implement. It includes the following modules:
* An SSH client-side module with forwarding capabilities
* A MySQL client-side module
* A user interface for configuring application settings and displaying query results
The SSH server runs in a remote network and is visible from the Internet. The database (MySQL) server runs in the same network as the SSH server and may not be visible from the Internet.
The process of performing secure data exchange between SecureMySQLClient and the Database server goes as follows:
1. The SSH client module negotiates a secure connection to the SSH server and establishes forwarding from some local port to the remote MySQL server.
2. The MySQL client module connects to the listening port opened by the SSH client module.
3. The SSH client and server set up a logical tunnel for the accepted connection.
4. The MySQL client sends a SELECT query to the port opened by the SSH client module, which encrypts it and sends it to the SSH server. The SSH server decrypts the request and forwards it to the MySQL server.
5. The SSH server receives the response from the MySQL server, encrypts it and sends it back to the SSH client, which decrypts it and passes it to the MySQL client module.
Looks too complex? Implementing this is easier than you think. So let's go and do it.
We will need the following products installed on the computer before creating the application:
* Microsoft Visual Studio .NET 2003, 2005 or 2008.
* EldoS SecureBlackbox (.NET edition), which can be downloaded from http://www.eldos.com/sbbdev/download.php.
* MySQL .NET Connector, which can be downloaded from http://www.mysql.com/products/connector/net/.
Let's now open Microsoft Visual Studio .NET (we will use the 2005 version) and try to build such an application from scratch.
After the GUI design has been finished, we can go on with the business logic code itself. First, we add references to the following assemblies to our project:
* SecureBlackbox
* SecureBlackbox.PKI (only in SecureBlackbox 5; SecureBlackbox 6 does not have this assembly)
* SecureBlackbox.SSHClient
* SecureBlackbox.SSHCommon
* MySql.Data
SSHForwarding notifies us about certain situations via its events, so we need to create handlers for some of them:
* OnAuthenticationSuccess - fired when the client authentication process has been completed.
* OnAuthenticationFailed - fired if the client was unable to authenticate using a particular authentication method. In general, this does not mean that the authentication process has completely failed; the client may try several authentication methods in sequence, and one of them may succeed.
* OnError - fired if a protocol error occurs during the session. Usually this leads to closure of the connection. The exact error can be identified via the error code passed to the handler.
* OnKeyValidate - used to pass the received server key to the application. Please note that incorrect handling of this event may result in a serious security breach. The handler should verify that the passed key corresponds to the remote server (and warn the user if it does not). If the key is valid, the handler should set the Validate parameter to true. The sample does not perform this check for the sake of simplicity.
* OnOpen - fired when the SSH connection is established and the component is ready to tunnel data. We will use the handler of this event to start the MySQL client component.
* OnClose - fired when the SSH connection is closed.
* OnConnectionOpen - fired when a new tunnel is created. The corresponding tunneled connection object is passed as a parameter.
* OnConnectionClose - fired when an existing tunnel is closed.
Next, we implement two core methods, SetupSSHConnection() and RunQuery(). The first initializes the SSHForwarding object and establishes an SSH session to the remote server by calling its Open() method; the second sends the query to the MySQL server.
The code of the SetupSSHConnection() method is pretty simple:
private void SetupSSHConnection()
{
    // Specifying address and port of the SSH server
    Forwarding.Address = tbSSHAddress.Text;
    Forwarding.Port = Convert.ToInt32(tbSSHPort.Text);
    // Setting credentials for authentication on the SSH server
    Forwarding.Username = tbUsername.Text;
    Forwarding.Password = tbPassword.Text;
    // Specifying network interface and port number to be opened locally
    Forwarding.ForwardedHost = "";
    Forwarding.ForwardedPort = Convert.ToInt32(tbFwdPort.Text);
    // Specifying the destination host where the server should forward the data.
    // Please note that the destination is specified from the SSH server's
    // point of view. E.g., 127.0.0.1 will stand for the SSH server's
    // localhost, not the SSH client's.
    Forwarding.DestHost = tbDBAddress.Text;
    Forwarding.DestPort = Convert.ToInt32(tbDBPort.Text);
    // Opening the SSH connection
    Forwarding.Open();
}
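One robustness note on the method above: Convert.ToInt32() throws a FormatException if one of the port text boxes contains a non-numeric value. A small hypothetical helper (not part of the original sample) could validate the fields before Open() is called:

// Hypothetical helper: returns false instead of throwing when the text
// is not a valid TCP port number (1-65535).
private static bool TryParsePort(string text, out int port)
{
    return int.TryParse(text, out port) && port >= 1 && port <= 65535;
}

SetupSSHConnection() could then report a configuration error to the user instead of letting the exception surface from the connection code.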
The code of the RunQuery() method is a bit more complex (to be exact, the code of the RunQueryThreadFunc() method, which RunQuery() invokes in a separate thread):
private void RunQueryThreadFunc()
{
    MySqlConnection MySQLConnection = new MySqlConnection();
    // forming connection string
    string connString = "database=" + tbDBName.Text + ";Connect Timeout=30;user id=" + tbDBUsername.Text + "; pwd=" + tbDBPassword.Text + ";";
    if (cbUseTunnelling.Checked)
    {
        // specifying local destination if forwarding is enabled
        connString = connString + "server=127.0.0.1; port=" + tbFwdPort.Text;
    }
    else
    {
        // specifying real MySQL server location if forwarding is not used
        connString = connString + "server=" + tbDBAddress.Text + "; port=" + tbDBPort.Text;
    }
    MySQLConnection.ConnectionString = connString;
    try
    {
        // opening MySQL connection
        MySqlCommand cmd = new MySqlCommand(tbQuery.Text, MySQLConnection);
        Log("Connecting to MySQL server...");
        MySQLConnection.Open();
        Log("Connection to MySQL server established. Version: " + MySQLConnection.ServerVersion + ".");
        // reading query results
        MySqlDataReader reader = cmd.ExecuteReader();
        try
        {
            for (int i = 0; i < reader.FieldCount; i++)
            {
                AddQueryColumn(reader.GetName(i));
            }
            while (reader.Read())
            {
                string[] values = new string[reader.FieldCount];
                for (int i = 0; i < reader.FieldCount; i++)
                {
                    values[i] = reader.GetString(i);
                }
                AddQueryValues(values);
            }
        }
        finally
        {
            // closing both MySQL and SSH connections
            Log("Closing MySQL connection");
            reader.Close();
            MySQLConnection.Close();
            Forwarding.Close();
        }
    }
    catch (Exception ex)
    {
        Log("MySQL connection failed (" + ex.Message + ")");
    }
}
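As a side note, the manual concatenation above produces a malformed connection string if a field value contains a ';' character. Connector/NET also offers a MySqlConnectionStringBuilder class that assembles the same string from typed properties; a sketch using the control names from the sample (assuming Connector/NET 5.x or later):

// A hedged alternative to manual concatenation, assuming
// MySqlConnectionStringBuilder from Connector/NET 5.x or later.
MySqlConnectionStringBuilder builder = new MySqlConnectionStringBuilder();
builder.Database = tbDBName.Text;
builder.UserID = tbDBUsername.Text;
builder.Password = tbDBPassword.Text;
builder.ConnectionTimeout = 30;
if (cbUseTunnelling.Checked)
{
    // Local forwarded port opened by the SSH client module
    builder.Server = "127.0.0.1";
    builder.Port = Convert.ToUInt32(tbFwdPort.Text);
}
else
{
    // Direct, untunneled connection to the MySQL server
    builder.Server = tbDBAddress.Text;
    builder.Port = Convert.ToUInt32(tbDBPort.Text);
}
MySQLConnection.ConnectionString = builder.ConnectionString;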
That is essentially all of the code. But there is one more thing I need to draw your attention to. As both the SSH and MySQL protocols run in separate threads and access GUI controls from those threads, we need to handle GUI access in a special way to prevent cross-thread problems. I will illustrate this with the Log() method:
delegate void LogFunc(string S);

private void Log(string S)
{
    if (lvLog.InvokeRequired)
    {
        LogFunc d = new LogFunc(Log);
        Invoke(d, new object[] { S });
    }
    else
    {
        ListViewItem item = new ListViewItem();
        item.Text = DateTime.Now.ToShortTimeString();
        item.SubItems.Add(S);
        lvLog.Items.Add(item);
    }
}
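The AddQueryColumn() and AddQueryValues() helpers referenced in RunQueryThreadFunc() are not shown in the listing. A sketch of AddQueryColumn() under the same InvokeRequired pattern might look like this (lvResults is an assumed ListView that displays the query output):

delegate void AddColumnFunc(string name);

private void AddQueryColumn(string name)
{
    if (lvResults.InvokeRequired)
    {
        // Marshal the call onto the UI thread, as in Log().
        Invoke(new AddColumnFunc(AddQueryColumn), new object[] { name });
    }
    else
    {
        lvResults.Columns.Add(name);
    }
}

AddQueryValues() would follow the same pattern, creating a ListViewItem from the string array and adding it to lvResults.Items.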
Finally, the application is finished, and we may try it out. Press F5 and specify the following settings in the text fields of the application form:
* The SSH server location, plus the username and password used to authenticate to it.
* The database server address, port, username, password, database name and query. Remember that the database server address should be specified as it is visible from the SSH server.
* The "Use tunneling" checkbox turned on.
Now click the Start button and wait for the query results. If all the parameters have been specified correctly, the query results will appear in the application's results view.
Features and requirements
The SSH protocol provides (and SecureBlackbox implements) the following features:
* Strong data encryption using AES, Twofish, Triple DES, Serpent and many other symmetric algorithms with key lengths up to 256 bits
* Client authentication using one or multiple authentication types (password-based, public key-based, X.509 certificate-based, interactive challenge-response authentication)
* Server authentication
* Strong key exchange based on DH or RSA public key algorithms
* Data integrity protection
* Compression of tunneled data
* Multiplexing of several tunneled connections through a single SSH connection
SecureBlackbox provides the following functionality as well:
* A comprehensive, standards-compliant implementation of the SSH protocol (both client and server sides)
* Support for cryptographic tokens as storage for keys and certificates
* Support for Windows system certificate stores
* Professional and fast customer support
SecureBlackbox is available in .NET, VCL and ActiveX editions. This means that you can use the components in projects implemented in C#, VB.NET, Object Pascal (Delphi and Kylix), FreePascal, VB6 and C++ languages.
SecureBlackbox (.NET edition) is available for Microsoft .NET Framework 1.1, 2.0, 3.0 and 3.5, and .NET Compact Framework.