Sarbanes-Oxley: Protecting your Clients’
By JJ Kuhl and Allen Monroe
The major provisions of Sarbanes-Oxley (originally named the Investor Protection Act) require public companies to disclose more financial information than in the past. The Act also holds directors and officers personally accountable for the accuracy of that information, by requiring these officers to certify the accuracy of the internal controls used in the reporting process. Traditionally, this has been satisfied through audits of the systems and procedures that produce financial reports. Sarbanes-Oxley extends this accountability to the accuracy of the underlying transactional data from which these financial reports are derived. California and other states have also enacted database statutes aimed at protecting the consumer.
These legal developments provide an added incentive for organizations to re-examine their Information Technology security and business continuity procedures. Some companies are evaluating new technologies and methods that represent significant improvements over current practices. One important trend is engaging outside specialists to audit security practices and to test business resumption plans. The IT staff of medium-size companies typically are hard-pressed to maintain existing systems, and often cannot allocate sufficient time to probing for holes in system security or conducting “fire drills” for restoring key systems from backup media. IT Risk Evaluations often find major shortcomings in the implementation and documentation of system setups, access control settings, and overall system topology. Such shortcomings can result in non-compliance with recent legislation.
Ensuring the transactional integrity and safety of corporate information has always been a “best practice.” Now, under Sarbanes-Oxley, it is a regulatory requirement. Moreover, as a growing share of corporate assets consists of intangibles such as knowledge capital and information, protecting shareholder equity increasingly means protecting those information assets.
The velocity of business has increased dramatically in the
past decade, forcing companies to extend their customer interface
via the Internet and to provide real-time information access.
These steps have been necessary for companies to compete effectively
in their markets and to reduce service costs. Such changes
have brought information technology onto center stage for
the support of key business processes. They also have created
significant new risk exposures which threaten the ability
of businesses to survive.
According to industry analysis, the leading causes of data loss are:
- 3% due to natural disaster,
- 7% due to computer viruses,
- 14% due to software corruption or program malfunction,
- 32% due to human error, and
- 44% due to hardware or system malfunction.
Recent studies have shown that the majority of businesses that lose data permanently during a catastrophic event never reopen. For example, following system outages caused by the 1993 World Trade Center bombing, 143 companies disappeared within a few years. In that case, not just the data was lost; so were the systems and infrastructure supporting the applications and data, and, worst of all, the people who knew how to restore the data.
In recent years, business continuity plans for information systems have focused on maintaining off-site repositories of data, together with systems documentation, so that vital applications can be restored in days or weeks following a disaster. However, in light of the increased velocity of business and the possible unavailability of the people needed to reconstruct failed systems, businesses increasingly are examining new ways of ensuring that vital systems and data continue uninterrupted.
Technologies that previously were affordable only to the largest businesses are now becoming part of the Information Technology survivability plans of medium-size organizations. Two such methods that have become cost-effective are known as “load balancing” and “failover.” Both rely on redundant servers located in geographically disparate locations. If one server fails, its processing load is switched to the other servers, so that a disaster at one location leaves key applications and data accessible from one or more other locations.
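As a rough illustration of the failover idea, the following sketch (in Python, using hypothetical server addresses and a hypothetical /health endpoint, neither of which comes from any particular product) probes a preference-ordered list of redundant servers and directs work to the first one that responds. A production deployment would rely on dedicated load-balancing or clustering software rather than a hand-rolled script.

```python
# Minimal failover sketch: probe redundant servers in order of preference
# and use the first one that answers its health check.
# The URLs and the /health endpoint are illustrative assumptions.
import urllib.request
import urllib.error

SERVERS = [
    "https://app.primary.example.com",    # primary data center
    "https://app.secondary.example.com",  # geographically separate standby
]

def first_healthy_server(timeout: float = 2.0):
    """Return the first server whose health check succeeds, or None."""
    for base in SERVERS:
        try:
            with urllib.request.urlopen(base + "/health", timeout=timeout) as resp:
                if resp.status == 200:
                    return base
        except (urllib.error.URLError, OSError):
            continue  # unreachable or timed out; try the next location
    return None

if __name__ == "__main__":
    target = first_healthy_server()
    if target is not None:
        print("Routing requests to", target)
    else:
        print("No healthy server available; begin recovery procedures")
```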
Such approaches also facilitate the geographic dispersion of archived data. The “system state” can also be archived, so that if a virus attack or software upgrade disables a key server, systems can be restored to their pre-failure state.
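The system-state idea can be pictured with a short sketch along the following lines, again in Python and with hypothetical directory paths. Real environments would use dedicated backup or imaging tools, but the shape is the same: take a timestamped snapshot, and restore it if a server must be returned to its pre-failure state.

```python
# Sketch of archiving and restoring a server's "system state".
# STATE_DIRS and ARCHIVE_DIR are illustrative placeholders.
import tarfile
import time
from pathlib import Path

STATE_DIRS = ["/etc/myapp", "/var/lib/myapp"]   # hypothetical configuration and data
ARCHIVE_DIR = Path("/backups/system-state")     # ideally replicated off-site

def snapshot_state() -> Path:
    """Write a timestamped, compressed archive of the state directories."""
    ARCHIVE_DIR.mkdir(parents=True, exist_ok=True)
    archive = ARCHIVE_DIR / time.strftime("state-%Y%m%d-%H%M%S.tar.gz")
    with tarfile.open(archive, "w:gz") as tar:
        for directory in STATE_DIRS:
            # Store paths relative to the filesystem root so they restore cleanly.
            tar.add(directory, arcname=directory.lstrip("/"))
    return archive

def restore_state(archive: Path, target: str = "/") -> None:
    """Unpack a snapshot, returning the directories to their pre-failure contents."""
    with tarfile.open(archive, "r:gz") as tar:
        tar.extractall(path=target)
```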
Outsourcing the setup and maintenance of the redundant servers is increasingly feasible, as specialists in system redundancy provide such services from high-bandwidth Class A data centers, where the capabilities of tens of millions of dollars of infrastructure can be rented for a few thousand dollars a month. Having outside people responsible for maintaining the systems also protects against deliberate corruption or sabotage of key data, one of the focal points of Sarbanes-Oxley.
Allen Monroe is founder and CEO of RFactor, specialists in enterprise risk management and outsourced system redundancy. J. J. Kuhl is a systems survivability consultant.