Wednesday, September 12, 2007

Merrill Lynch IT Systems - 1

1. Merrill Lynch Replacing Global Order Entry Systems
Brokerage consolidates cross-border trading systems; expects three-year ROI

March 11, 2002 (Computerworld)

March 11, 2002 (Computerworld) -- New York
2. Merrill Lynch & Co. is replacing its order entry software around the world as part of a multimillion-dollar project aimed at eliminating a hodgepodge of in-house and acquired systems.
Marvin Balliet, the chief financial officer for Merrill Lynch's technology group, said he expects the project - the cost of which he ballparked at "tens of millions of dollars" - to generate a return on investment within three years through cost reductions and additional sales.
Still, he said, the actual ROI will be difficult to measure because "part of why you do it is cost reduction, part of why you do it is system enhancement, and part of why you do it is revenue retention."
"Today, my revenue may be down in a country where I put it in, but that has nothing to do with the fact that I now have a new platform," said Balliet.

An Industry Trend

Centralizing systems has become a trend among financial services companies as the Internet and globalization - combined with the industry's push to do straight-through processing - are forcing brokerages and their clients to communicate across borders in near-real-time, according to analysts.
"It was an interest of companies before. Now, it's a real driver," said Shaw Lively, an analyst at IDC in Framingham, Mass. "It's not necessarily always an upgrade but more about trying to get common systems, common platforms, common interfaces," he said.
One financial services firm that's part of the global systems consolidation trend is New York-based Citibank, which is currently replacing a decades-old set of back-office corporate banking systems with a single platform in all of its overseas corporate offices [News, Feb. 4].
Balliet said Merrill Lynch is initially focusing on its equity cash businesses in overseas operations because those systems offer the greatest opportunity for increased efficiency. He said the new platform will establish a global network of trading desks that can execute orders on regional exchanges without relying on local connections. Merrill Lynch's current vendor-built order entry systems require middleware interfaces to allow far-flung offices to communicate with one another.
Balliet said Merrill Lynch is still evaluating what it will do with its U.S. order entry system "because we have a reasonably robust system from one of our acquisitions in the U.S., and the question is, do we have one global system or two?"
Piecemeal Systems
Overseas, however, Merrill's international trade order entry systems have been put together piecemeal, with some added through acquisitions of other firms, according to Balliet.
In May, Merrill Lynch partnered with Royalblue Financial PLC, the London-based maker of the global financial trading software that supports Merrill Lynch's global trading requirements in the European, Japanese, Asia-Pacific and U.S. markets. The deal included Royalblue's Fidessa trading platform and consultancy services. Royalblue's software supports order management, trade management and market execution across some 20 markets, according to Royalblue officials.
The Royalblue order management system rollout has been completed at Merrill Lynch offices in Japan and the rest of Asia; the system is now being introduced at the firm's European offices.

If Merrill Lynch decides not to replace its U.S.-based order entry systems, the project is expected to be completed within the next year, according to Balliet.
Big global systems replacement efforts like Merrill Lynch's typically cost tens of millions of dollars to complete, but companies may find that such projects are worthwhile because "biting the bullet" now helps avoid the cost of developing interfaces in the future, said Larry Tabb, an analyst at Needham, Mass.-based TowerGroup.

"In a period of business earnings pressure, it can force firms to look at how to reduce overall cost and centralize technology, reduce redundancy and streamline their processing," said Tabb.

http://www.computerworld.com/managementtopics/roi/story/0,10801,68966,00.html

-----------------------------------------


3. Unleashing the Power of Data

April/2004

Like most corporations, Merrill Lynch has made tremendous investments to build and maintain its IT infrastructure.

The infrastructure comprises multiple hardware platforms (including mainframes, Unix systems, and Intel-based machines) and a plethora of business applications, both developed in house and purchased from vendors. And the environment is in a constant state of flux as the technology and business landscapes continue to change.

In order to reduce costs and improve operational efficiencies, Merrill Lynch realized it needed a comprehensive picture of the environment to:

Improve productivity by making it easier for employees to find what they need
Eliminate duplicated efforts in different business units
Prevent poor decisions based on faulty information.

The information to create this comprehensive picture of the IT environment existed, but was stored in heterogeneous and fragmented data silos that were inconsistently sourced and refreshed. A simple question such as "How many servers are running Windows NT?" would generate multiple unsupported answers. The data was also stored in various formats, from relational databases to Word documents to the ever-popular and pervasive spreadsheet. The varied formats further limited the usefulness of the data: The relational data couldn't be joined with unstructured data.

The team charged with creating this comprehensive IT picture found an established infrastructure and methodology in Merrill Lynch's existing data warehouse practice. Building on that foundation, the team developed an IT data warehouse that resulted in significant improvements in operational efficiencies, millions of dollars saved by cost avoidance and cost reduction, and an unexpected benefit — support for Six Sigma and Sarbanes-Oxley Act (SOX) compliance efforts.

The Data Warehouse Practice

The Merrill Lynch data warehouse practice is a disciplined approach to combining a collection of products and services with a well-defined architecture to facilitate the acquisition, transformation, loading, and delivery of data. The practice was developed in 1999, during the creation of Midas, a data warehouse serving Merrill Lynch's retail division, the Global Private Client group. Midas, which served as the model for the IT data warehouse implemented in 2002, was designed to position Merrill Lynch's marketing team to react quickly to changing market conditions, track client profitability, facilitate and track marketing campaigns, and so on.

The architectural framework at the core of the data warehouse practice is composed of the following six layers:

Sourcing. The data-sourcing layer handles data acquisition, refinement, and aggregation. This layer also identifies data sources, selects the correct data elements, and refines, filters, and summarizes data elements to produce output suitable for loading into the data layer. The sourcing layer is typically where extract, transform, and load (ETL) tools are used. All data in this layer is stored in a central staging area where it awaits its specific processing.

Data. This layer comprises the hardware and software that host the data store. The IT data warehouse uses DB2 UDB for AIX v.8.1, and an OS/390 warehouse uses DB2 for OS/390 v.7. Scalability and flexibility are crucial, because the architecture must support growth without significant reinvestment in resources. The Midas data warehouse, which uses DB2 UDB for AIX Enterprise-Extended Edition v.7.2, uses a massively parallel processing (MPP) architecture that has allowed Merrill Lynch to expand from a 4-node to a 14-node cluster without redesigning the application.

Metadata. A robust metadata layer empowers clients to serve themselves. The physical structures of each table and associated columns are fully documented and available to the user community via Web front ends. The self-service model allows internal clients to explore the available data, helping in query construction.

Access. This layer is responsible for providing internal clients with the view of the data. No one tool fits all users' needs; therefore, this layer uses a range of products to address different levels of retrieval requirements. A Web front end handles basic reporting requirements by providing reports that many users require. Hummingbird's BI Web provides ad hoc reporting capabilities and an easy-to-use entity/relationship query-building environment. Cognos PowerPlay, which uses multidimensional data structures called cubes, handles what-if analysis. Cubes let users view different aspects of the data related to a measure, such as job counts by time. Power users can access relational data directly via any ODBC-compliant product, including Microsoft Access or Excel.
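
To illustrate the kind of direct query the access layer allows, here is a minimal sketch using an ODBC client from Python; the data source name, table, and columns are assumptions for illustration, not Merrill Lynch's actual schema. It answers the earlier "How many servers are running Windows NT?" style of question from the warehouse rather than from a spreadsheet:

    import pyodbc  # any ODBC-compliant client would do, per the article

    # Hypothetical DSN and schema; the real warehouse tables are not named in the article.
    conn = pyodbc.connect("DSN=ITDW;UID=analyst;PWD=...")
    cursor = conn.cursor()

    cursor.execute("""
        SELECT os_name, COUNT(*) AS server_count
        FROM asset_inventory
        WHERE asset_type = 'SERVER'
        GROUP BY os_name
        ORDER BY server_count DESC
    """)

    for os_name, server_count in cursor.fetchall():
        print(f"{os_name}: {server_count}")
    conn.close()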

Data visualization (heatmaps). Data visualization techniques provide higher-level analysis. For example, heatmaps, popularized by SmartMoney.com's Map of the Market (www.smartmoney.com/maps), highlight Merrill Lynch's virus readiness by site, organization, and number of servers for upper management. The heatmap interface displays multiple dimensions in a two-dimensional format by using shades of colors — red (bad) to green (good) — and shapes (whose size indicates the number of servers in a cell) to highlight critical information. Although this interface is traditionally used to display financial information (changes in stock prices), we adapted it to technology data and found it an excellent vehicle for focusing attention on the critical information clients need to make decisions.
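
As a toy sketch of the heatmap idea (the sites, counts, and readiness scores below are invented, not Merrill Lynch data): a readiness score drives the red-to-green shade, and the server count drives the relative cell size.

    # Hypothetical input: (site, number_of_servers, patch_readiness from 0.0 bad to 1.0 good)
    sites = [
        ("New York", 420, 0.95),
        ("London",   180, 0.60),
        ("Tokyo",     75, 0.30),
    ]

    def shade(readiness):
        """Map a 0..1 readiness score to an RGB colour between red (bad) and green (good)."""
        red = int(255 * (1.0 - readiness))
        green = int(255 * readiness)
        return (red, green, 0)

    total = sum(n for _, n, _ in sites)
    for site, servers, readiness in sites:
        area = servers / total  # relative cell size in the treemap-style layout
        print(f"{site:10s} colour={shade(readiness)} cell_share={area:.0%}")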

Formatting and delivery. This layer is responsible for formatting and delivering to internal clients the right information at the right time by the right method. The data can be delivered using comma-separated files, XML, and HTML. Delivery mechanisms include email, FTP, and HTTP. An in-house developed query scheduler lets users create, schedule, and publish the results of SQL queries that are executed in the data warehouse. The results of these queries can be delivered via email or FTP, based on the user's preferences. This application transformed the information delivery layer from the typical 9-to-5 reporting system into a 24x7 system.
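
The in-house scheduler itself is not described in the article; the following sketch only illustrates the pattern, with hypothetical host names, table names, and addresses: run a saved SQL query on a schedule and deliver the result set as a comma-separated file by email.

    import csv, io, smtplib, sched, time
    from email.message import EmailMessage
    import pyodbc

    def run_and_mail(dsn, sql, recipient):
        # Execute the saved query against the warehouse.
        conn = pyodbc.connect(dsn)
        rows = conn.cursor().execute(sql)
        # Format the result set as CSV, one of the delivery formats the article mentions.
        buf = io.StringIO()
        writer = csv.writer(buf)
        writer.writerow([col[0] for col in rows.description])
        writer.writerows(rows.fetchall())
        conn.close()
        # Deliver by email; FTP or HTTP delivery would follow the same pattern.
        msg = EmailMessage()
        msg["Subject"], msg["To"], msg["From"] = "Scheduled warehouse report", recipient, "itdw@example.com"
        msg.set_content("Attached is the scheduled report.")
        msg.add_attachment(buf.getvalue(), filename="report.csv")
        with smtplib.SMTP("mailhost.example.com") as smtp:
            smtp.send_message(msg)

    # Simplified: a single delayed run stands in for a real nightly scheduler.
    scheduler = sched.scheduler(time.time, time.sleep)
    scheduler.enter(24 * 3600, 1, run_and_mail,
                    ("DSN=ITDW", "SELECT job_name, status FROM job_runs", "ops@example.com"))
    scheduler.run()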

IT Data Warehouse Objectives
The core objectives in creating the IT data warehouse were to drive down costs while improving the effectiveness and efficiencies of the IT organization. An IT data warehouse would allow Merrill Lynch to improve its forecasting capabilities, aid in the decommissioning of applications, and identify and support offshore projects.

It also would develop into the environment that would assist in efficiency and governance programs such as Six Sigma and SOX.

Six Sigma is a disciplined approach to quality control that uses statistical analysis to measure and improve operational efficiencies by eliminating or reducing defects in processes. All Six Sigma projects follow the define, measure, analyze, improve, and control methodology.

The Sarbanes-Oxley Act of 2002, signed into law in the aftermath of Enron and other corporate scandals, focuses on restoring investor confidence by enforcing financial transparency and accountability for publicly traded companies. To comply with the law, many companies are improving their internal controls and processes; however, few complete third-party solutions have emerged to support SOX efforts. Most companies are complying with SOX by piecing together diverse processes and disparate data into SOX compliance reports. This process is costly and manually intensive.

A byproduct of SOX is that companies are reviewing all their internal processes, including IT policies and controls. SOX requires companies to have formal, frequently audited policies covering information security, change management, problem management, software management, and disaster recovery practices.

The IT data warehouse wasn't originally designed to support SOX, but it was designed to serve the IT informational needs of a diverse user base. The underlying architecture and data models were extended to support SOX by providing information to support both internal reviews and external audits. For example, by collecting and storing software patch levels, Merrill Lynch is able to assess the potential risk of virus attacks. A poorly implemented information security policy can lead to disastrous results, which could affect the firm's ability to conduct business. The major business disruptions caused by viruses such as Melissa, SQL Slammer, and Mydoom demonstrate the potential effects.

Using the IT data warehouse, auditors can easily obtain the information needed to establish a control objective, determine the firm's activity in complying with this objective, and automate the delivery of this information using the various information delivery techniques the data warehouse provides.

In short, auditors can get the right information, in the right way, at the right time, to answer the necessary compliance questions. This self-service model means less reliance on the IT staff to acquire and produce audit information. And, the auditors can access unbiased data directly, ensuring its credibility.

Building the Warehouse
Two major IT information sources serve the Merrill Lynch IT environment: the mainframe and the distributed platforms. Although information from both sources is required for the full view, the team decided to tackle the highly centralized mainframe environment first, developing the OS/390 data warehouse. The Information Technology Data Warehouse (ITDW), which I'll discuss in more detail, was created in a later phase.

The team reduced development costs by leveraging the existing data warehouse practice. The well-defined architecture served as the blueprint for this project and allowed us to implement a data subject area in less than 90 days. This initial success, and the subsequent additions of other subject areas, allowed the team to broaden the content in quickly delivered, manageable chunks.

Another key decision was to systematically collect as much information as possible from each of the major IT sources, mainframe and distributed. Manual input was kept to a bare minimum, improving data quality and eliminating a plethora of manual data collection procedures and tools that varied by technology unit. In addition to creating cleaner, more reliable data, this decision led to a reduction in the number of tools and procedures used throughout the firm, leading to cost savings and efficiencies. Technology units can now focus on how the data can be used to solve problems rather than how to collect it.

The core information required from the mainframe environment is contained in the Job Control Language (JCL) used to execute batch jobs. A tremendous amount of critical information is contained in these jobs. Unfortunately, the storage format makes it difficult to extract information from a JCL job. Simple questions such as "How many jobs are using a particular program?" or "Which job uses a third-party product?" are difficult to answer. Often, a senior technical analyst would research these questions by running numerous scans against information stored in partitioned dataset (PDS) libraries.

Another barrier to extracting this information is that a job normally calls procedures (PROCs). A PROC contains substitution variables that must be expanded to view the actual production instructions that will be executed by the operating system. This expansion occurs at execution time, but the JCL is stored in the PDS libraries without the substitutions being materialized.

Parsing a JCL job into its lowest components (such as job name, programs, step names, datasets) is a difficult task. JCL isn't a positional language; slight variations in format will cause inaccurate results. Merrill Lynch used parsers developed in house, but they weren't maintained consistently and were limited in their scope. To solve these problems, the team chose Blue Phoenix Corp.'s C-Discovery product to parse the mainframe environment. This environment was composed of more than 130,000 jobs, 80,000 programs, and 3 million lines of JCL code. C-Discovery decomposed jobs, procedures, and programs to their lowest forms and stored the results in flat files, which were then loaded into the database on a nightly basis.
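
C-Discovery's internals are proprietary, but a heavily simplified sketch of the parsing idea might look like the following: scan JCL text for JOB, EXEC, and DD statements and emit flat records keyed by job name, ready for a nightly bulk load. Real JCL (continuations, PROC expansion, symbolic parameters) is far messier, which is exactly why a commercial parser was chosen.

    import re

    SAMPLE_JCL = """\
    //PAYROLL1 JOB (ACCT),'NIGHTLY PAYROLL'
    //STEP010  EXEC PGM=PAYCALC
    //INPUT    DD DSN=PROD.PAYROLL.MASTER,DISP=SHR
    //STEP020  EXEC PGM=PAYRPT
    //REPORT   DD DSN=PROD.PAYROLL.REPORT,DISP=(NEW,CATLG)
    """

    def parse_jcl(text):
        """Yield flat (job, step, program, dataset) records from simplified JCL."""
        job = step = program = None
        for line in text.splitlines():
            line = line.strip()
            m = re.match(r"//(\S+)\s+JOB\b", line)
            if m:
                job = m.group(1)
                continue
            m = re.match(r"//(\S+)\s+EXEC\s+PGM=(\S+?)(?:,|$)", line)
            if m:
                step, program = m.group(1), m.group(2)
                yield (job, step, program, None)
                continue
            m = re.match(r"//\S+\s+DD\s+.*?DSN=([^,\s]+)", line)
            if m:
                yield (job, step, program, m.group(1))

    for record in parse_jcl(SAMPLE_JCL):
        print(record)  # e.g. ('PAYROLL1', 'STEP010', 'PAYCALC', 'PROD.PAYROLL.MASTER')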

The JCL job name is the core of the OS/390 data warehouse data model. All subject areas are linked together in the data model using the job name as their key. Other subject areas in the data model are job scheduling (OPCA), change management, job responsibility, problem tickets (BMC Remedy), change requests (Computer Associates Endeavor), database, and DASD and tape usage. The data store is a DB2 for z/OS database.

The OS/390 portion of the IT data warehouse project was integrated into Merrill Lynch's day-to-day operations. The high-level views of the aggregated data combined with low-level detailed data empowered employees at any level of the organization to use the data to effect change. In fact, the OS/390 data warehouse transformed the company's technology units: Instead of discussing the validity of the data, the units were actually solving problems. Blue Phoenix was so impressed by the innovative use of C-Discovery that it purchased the solution from Merrill Lynch.

Once the OS/390 data warehouse was available to all employees, many cost saving and cost avoidance programs were initiated and completed. Examples include:

Eliminating unused datasets, reducing overallocated datasets, and deleting datasets attached to employees who are no longer with the firm. Estimated cost savings: $2,300,000.
Reducing the number of DB2 DASD and Image copy datasets. Estimated cost savings: $5,000,000.
Expediting the decommissioning of a mainframe campaign system. Estimated cost savings: $250,000.
Eliminating, consolidating, and replacing vendor products within the data center resulting in reduced licensing fees and complexity.

The OS/390 data warehouse has also improved the effectiveness of outsourcing engagements. Outsourcing partners have access to the OS/390 data warehouse and are able to streamline their impact analysis procedures. Providing the information in a self-service model to our partners improved the effectiveness of both our in-house staff and outsourcing vendors.

The ability to view a job's predecessors and successors via the Web reduced production error rates and eliminated the need to log on to TSO.
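
A sketch of how that predecessor/successor view might be answered from the job-scheduling subject area; the job_dependency table and column names are assumptions for illustration only, not the warehouse's actual model:

    import pyodbc

    def job_neighbours(job_name, dsn="DSN=ITDW"):
        """Return the predecessors and successors of a batch job from the warehouse."""
        conn = pyodbc.connect(dsn)
        cur = conn.cursor()
        preds = [r[0] for r in cur.execute(
            "SELECT predecessor FROM job_dependency WHERE successor = ?", job_name)]
        succs = [r[0] for r in cur.execute(
            "SELECT successor FROM job_dependency WHERE predecessor = ?", job_name)]
        conn.close()
        return preds, succs

    preds, succs = job_neighbours("PAYROLL1")
    print("runs after:", preds)
    print("runs before:", succs)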

The Six Sigma team uses the OS/390 data warehouse to provide detailed metrics for numerous strategic cost reduction efforts, such as data center automation, outsourcing, application decommissioning, and so on. The OS/390 data warehouse contains the information required to assist the Six Sigma specialists, called black belts, in acquiring and analyzing the data for their projects.

The ITDW
The next phase was to tackle the world of distributed systems. Managing the distributed environment was always a daunting task, as a complete picture was difficult to obtain. Whereas the mainframe environment is highly centralized, with key information stored in a few locations, Merrill Lynch has literally tens of thousands of distributed assets on heterogeneous platforms spread around the globe.

Each asset contained an agent that collected some — but not all — of the information required to populate the database. Using a heterogeneous product collection strategy allowed the team to leverage existing products and build a comprehensive repository quickly rather than searching for another solution. The products used to acquire the information were Tangram Asset Insight, NETIQ, BMC Patrol, Microsoft SMS, and Computer Associates AMO.

Another challenge in sourcing the ITDW was that each product had its own repository. For example, Asset Insight used an Oracle data store, whereas NETIQ used SQL Server and Patrol used a proprietary file structure. Collecting information required a different ETL process for each source. To simplify collection, the team used the enterprise information integration tool DB2 Information Integrator (DB2 II), which allows centralized access to federated data stores. The location and type of data store was transparent to the ETL application developer. All differences between the heterogeneous data stores were fully abstracted from the ETL process. The team could use a single unified SQL dialect. Using DB2 II simplified a multistep and complex ETL process.
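
The nicknames and table names below are invented, but they illustrate the federation pattern: DB2 II exposes each vendor repository as a local nickname, so a single SQL statement can join, say, Asset Insight's Oracle store to NETIQ's SQL Server store without the ETL code knowing where either lives.

    import pyodbc

    # One connection to the federated DB2 II server; the Oracle and SQL Server
    # back ends sit behind nicknames created by the DBA, for example:
    #   CREATE NICKNAME FED.ASSET_INSIGHT_HW FOR ORA_SERVER."AI"."HARDWARE";
    #   CREATE NICKNAME FED.NETIQ_PERF FOR MSSQL_SERVER."dbo"."PERF_SUMMARY";
    conn = pyodbc.connect("DSN=FEDERATED_ITDW")

    sql = """
        SELECT hw.asset_name, hw.model, perf.avg_cpu_pct
        FROM FED.ASSET_INSIGHT_HW AS hw
        JOIN FED.NETIQ_PERF AS perf
          ON perf.asset_name = hw.asset_name
        WHERE perf.avg_cpu_pct < 10   -- candidate under-used hardware
    """
    for asset_name, model, avg_cpu in conn.cursor().execute(sql):
        print(asset_name, model, avg_cpu)
    conn.close()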

The team developed a normalized data model to store the various subject areas associated with distributed assets. The core entity of the data model is an asset. The keys used to identify the asset are asset name, serial number, and an internal attribute called tag number. The tag number is a 1D bar-coded metal label affixed to every asset. This tag is used with an external scanner device to allow quick and easy physical audits.

The lightweight database DB2 Everyplace, used with a Pocket PC-compatible PDA and a detachable scanner, enhanced the team's auditing capabilities. Facility groups had been using antiquated single-purpose gun scanners and uploading the data to nonintegrated data stores. With the new process, our facility group has a multipurpose device that's linked to a central repository via a synchronization process.

Another benefit of the PDA approach was to improve knowledge of the infrastructure environments during denial of service security attacks. During such attacks, staff had relied on out-of-date spreadsheets to make critical decisions. With the PDA, the infrastructure staff can determine who owns the affected box, whether it's a test or production machine, and the IP segment it resides in.
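
A small sketch of that kind of lookup, using an invented snapshot of the asset subject area; in practice the PDA and web front ends query the central repository directly rather than a static list:

    import ipaddress

    # Hypothetical snapshot pulled from the ITDW asset subject area.
    ASSETS = [
        {"name": "nyweb01",  "ip": "10.12.4.17",  "owner": "E-Commerce Infra", "env": "production"},
        {"name": "njtest07", "ip": "10.44.9.102", "owner": "QA Services",      "env": "test"},
    ]
    SEGMENTS = [ipaddress.ip_network("10.12.4.0/24"), ipaddress.ip_network("10.44.9.0/24")]

    def lookup(ip_text):
        """Identify the owner, environment and IP segment of an affected box."""
        ip = ipaddress.ip_address(ip_text)
        asset = next((a for a in ASSETS if ipaddress.ip_address(a["ip"]) == ip), None)
        segment = next((s for s in SEGMENTS if ip in s), None)
        return asset, segment

    asset, segment = lookup("10.12.4.17")
    print(asset["owner"], asset["env"], "segment:", segment)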

Other key data subject areas linked to an asset are storage, security patches, processes, IP address, asset owner, software, databases, and so on.

Using the existing data warehouse infrastructure paved the way to a quick return on investment. All aspects of an asset, from beginning to end, can be tracked and recorded in the database. The organization can now focus on delivering solutions rather than on one-off technology projects. CIO Magazine recently selected Merrill Lynch as a winner of the Excellence in Technology Awards for 2003 based on the ITDW achievements. The award is given to companies that demonstrate an ability to generate greater value with limited IT resources.

ITDW Success Stories
The implementation of the ITDW environment created a single consolidated view of the distributed environment. Leveraging this environment empowered employees from any level of the organization to achieve their objectives. The ITDW has been used for the following projects:

Application impact analysis as well as decommissioning
Server consolidation
Virus identification (such as NIMDA and SQL Slammer) and notification
Discovering underused hardware by combining asset information and performance metrics
Operating system migrations
Vendor penetration reporting by software, hardware, or model
Vendor negotiations
Disaster recovery planning
Maintenance contract auditing as well as negotiations
Operational efficiency improvements.

Great Returns
The OS/390 and ITDW environments helped Merrill Lynch use its data assets efficiently and effectively. Data warehousing methodologies and practices allowed IT personnel to unlock the potential of their data through business intelligence tools. Treating the data as an asset allowed Merrill Lynch to achieve its tactical and strategic objectives faster, better, and more cost-effectively. Now all employees can turn raw data into strategic information. But the story doesn't end here: New data subject areas are constantly being added to meet any emerging need.




Howard Goldberg, author of the original article, is a vice president at Merrill Lynch, a leading financial firm. You can reach him at howard_goldberg@ml.com.

http://www.db2mag.com/story/showArticle.jhtml?articleID=18901174
----------------------------------------


Merrill Lynch is a leading global financial management and advisory company with a presence in 44 countries and total client assets of about $1.8 trillion. The investment banking group at Merrill Lynch relies on its data warehouse to help identify non-intuitive market trends and formulate strategies for improving business success. To deal most effectively with the great many disparate sources of data that populate this warehouse, the group selected data integration tools from Informatica. In Sun Enterprise servers, the firm found the ideal complement for Informatica technology since the product combination scales gracefully as a unit and meets all the requirements for enterprise-class deployment.
http://whitepapers.techrepublic.com.com/casestudy.aspx?docid=141176

------------------

4. www.dtcc.com/downloads/products/gca/merrill.pdf

------------------------------

5. Merrill Lynch Goes VoIP
February 10, 2005
Merrill Lynch reported that it will install IP PBX and phone gear from Avaya and Cisco. Headquarters sites in the U.S., Brazil, Australia and Japan will install an Avaya IP-PBX supporting 10,000 employees, while branch offices will use Cisco IP phones with a centralized Cisco IP-PBX supporting 14,000 financial advisors.


When a huge financial firm like Merrill Lynch puts its trust in VoIP and sees it as a secure technology, considering all the possible liabilities from the multibillion-dollar transactions it handles daily, one has to stand up and take notice...


It is also significant because Merrill dropped Cisco as its VoIP provider for 7,500+ workers at its New Jersey headquarters and in Japan in mid-2003 due to security concerns. I guess Merrill Lynch got over that mental hurdle or Cisco and Avaya did a great job selling them.



http://blog.tmcnet.com/blog/tom-keating/voip/merrill-lynch-goes-voip.asp

---------------------

6. Risk management seen as key to IT security

10 Mar 2004

http://www.computerworld.com/securitytopics/security/story/0,10801,90987,00.html

In IT security, emotional reactions, panic and
legislation are counterproductive. But intelligent risk management can
enable organizations to face an uncertain future optimistically.
That was the message from Merrill Lynch & Co.'s security chief to
attendees at Computerworld's Premier 100 IT Leaders Conference here
yesterday.


David Bauer, first vice president and chief information security and
privacy officer at Merrill Lynch, gave his audience a historical
perspective on the evolution of IT security, starting with the Morris
worm attack of 1988. That attack took the Internet by surprise, he
said. There were no tools to fight back and no source of reliable
information. Responses were uncoordinated, and the result was
"complete havoc," Bauer said.


He contrasted that with the Mydoom attack last month, when Merrill
Lynch combined good tools with a coordinated and carefully planned
response to understand and contain the threat after just one
infection. That attack, he said, was "just another event."


"The difference between then and now is tremendous," Bauer said, "and
preparation is the key." Preparation requires a focus on risk
management, intelligence-driven prevention and response, security at
the data-object level and a focus on both the corporation and the
individual consumer of technology.


"It's easy to get somebody's password, so make the damage that can be
done by an individual as small as possible," he said.


Bauer also suggested that, since IT security is fundamentally a
technology problem, it should be handled within the IT operation.


Merrill Lynch's IT security strategy is built around strong
organization; threat management, including intelligence, planning and
instant response; comprehensive security services; attention to public
policy, including active attempts to educate legislators; and agile
response to the changing risk environment, he said.


A key component of that strategy is dynamic risk assessment. Using
tools such as scanners, log analysis, risk metrics and asset
inventory, Merrill Lynch's security group produces a biweekly security
brief analyzing and prioritizing current threats. "That allows us to
go from a circle-the-wagons approach to intelligent risk management,"
Bauer said.
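
The article does not explain how the brief is assembled; purely as a hypothetical sketch, prioritization might weight each threat's severity by the firm's measured exposure to it, combining scanner output with the asset inventory:

    # Hypothetical inputs; none of these names or numbers come from the article.
    threats = [
        {"name": "Mydoom variant",    "severity": 8, "exposed_hosts": 120},
        {"name": "Unpatched IIS bug", "severity": 6, "exposed_hosts": 900},
        {"name": "Old macro virus",   "severity": 3, "exposed_hosts": 15},
    ]

    def risk_score(t):
        """Simple risk metric: severity weighted by the number of exposed hosts."""
        return t["severity"] * t["exposed_hosts"]

    print("Biweekly security brief (highest risk first):")
    for t in sorted(threats, key=risk_score, reverse=True):
        print(f"  {t['name']:<18} severity={t['severity']} exposed={t['exposed_hosts']:>4} score={risk_score(t)}")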


In response to audience questions, Bauer said that as a percentage of
the IT budget, Merrill Lynch's security service costs less than that
of any of its competitors. "It's not about how much you spend but how
well you spend it," he said. "We're not making vendors rich, but if we
buy something, we use it."


He also noted that about half of his spending is advisory, helping the
company build secure systems, while the rest goes toward risk
management, prevention and response.


Bauer addressed the problem of legislation, which he said drives up
costs and takes resources away from actual risk mitigation. "Part of
our strategy is our Legislative Watch," he said. "We try to keep ahead
of legislators and influence them, if not to cancel legislation at
least to word it properly." He urged all corporations to do the same.


Looking ahead, Bauer predicts that the threat picture will be
"interesting." But with defenses built around thoughtful planning, he
said, "I'm optimistic about our chances for success."


http://seclists.org/isn/2004/Mar/0053.html
--------------------

2007

7. Merrill Lynch on the grid in race to speed up apps

30 Jan 2007

Merrill Lynch has developed an enterprise computing grid that allows it to run applications 800 times faster than previously by putting to work the power of disaster recovery servers and other under-utilised resources.

The investment bank plans to use the grid to run simulations and risk analysis for high value derivatives trades. Ten applications are currently being run, but the bank plans to have 30 running by the end of the first quarter of 2007.

How fast crucial calculations can be made has a material impact on profitability. "If you are looking at a £200m deal and it takes 600 hours to run the calculations, you need to get this down to an hour," said Juan Lando, who heads up the grid centre of expertise at Merrill Lynch.

Using dedicated servers led to an under-utilisation of hardware, as users needed to over-specify servers to cope with peaks in demand, Lando said. "Management wanted to avoid having to build datacentres," he said. Accordingly, Lando's team looks to identify and move suitable applications onto the bank's grid.

The grid works because applications are all written to a common standard, said Lando. It uses Red Hat Linux and Windows operating systems, Gemstone for data caching and Datasynapse for management and the grid programming environment.

Applications are developed either for Microsoft .net 2.0 for Windows 2003 applications, or the standard Java Enterprise virtual machine for Red Hat Linux.

How the grid works

Merrill Lynch's strategy for running applications on the grid is known as "intelligent scavenging".

While some users try to make use of spare desktop PC capacity, the investment bank prefers using datacentre processors to boost processing power, because the datacentre hardware runs in a standard environment.

The Merrill Lynch grid implements a "follow the moon" policy to use free datacentre processing power at the end of the day, when the servers are being used less.

It takes advantage of servers in disaster recovery sites and datacentres, which would usually be unavailable for normal business use.

The software monitors keyboard and mouse activity, automatically releasing the disaster recovery site from the grid whenever a system administrator needs to access the site.

There is also a "red button", a script that the system administrator can run to manually disconnect the site from the grid if a disaster recovery plan is invoked. This approach gives applications a massive amount of processing capacity to tap into.
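
As a toy sketch of the scavenging policy described above (the host records, idle threshold, and out-of-hours window are all invented; the production environment relies on DataSynapse for this):

    from datetime import datetime, timezone

    # Hypothetical host records; idle_seconds would come from keyboard/mouse monitoring.
    hosts = [
        {"name": "dr-node-01",  "site": "DR",         "idle_seconds": 7200, "red_button": False},
        {"name": "dr-node-02",  "site": "DR",         "idle_seconds": 30,   "red_button": False},
        {"name": "prod-calc-9", "site": "datacentre", "idle_seconds": 9999, "red_button": True},
    ]

    def available_for_grid(host, now_utc):
        """Follow-the-moon style policy: only scavenge quiet, non-invoked hosts out of hours."""
        if host["red_button"]:          # disaster recovery plan invoked: release immediately
            return False
        if host["idle_seconds"] < 600:  # an administrator is using the machine
            return False
        return now_utc.hour >= 20 or now_utc.hour < 6  # simplified end-of-day window

    now = datetime.now(timezone.utc)
    eligible = [h["name"] for h in hosts if available_for_grid(h, now)]
    print("hosts released to the grid:", eligible)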

In one instance, Lando's team assessed a complex Excel model and the developers recoded the calculations in C++, allowing them to run on the grid, and so improving the performance of the spreadsheet by 10,000 times.
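
The production recode was done in C++, per the article; purely to illustrate the divide-and-conquer idea that makes grid execution pay off, here is a small Python sketch that farms independent Monte Carlo batches out to local worker processes standing in for grid engines:

    import random
    from concurrent.futures import ProcessPoolExecutor

    def price_batch(args):
        """One grid task: value a toy derivative over a batch of simulated paths."""
        seed, n_paths = args
        rng = random.Random(seed)
        payoff_sum = 0.0
        for _ in range(n_paths):
            terminal = 100.0 * (1.0 + rng.gauss(0.03, 0.2))  # toy one-step price model
            payoff_sum += max(terminal - 105.0, 0.0)          # call option payoff, strike 105
        return payoff_sum / n_paths

    if __name__ == "__main__":
        batches = [(seed, 50_000) for seed in range(16)]      # 16 independent tasks
        with ProcessPoolExecutor() as pool:                   # stand-in for grid engine workers
            estimates = list(pool.map(price_batch, batches))
        print("estimated value:", sum(estimates) / len(estimates))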

http://www.computerweekly.com/Articles/2007/01/30/221464/merrill-lynch-on-the-grid-in-race-to-speed-up-apps.htm

-----------------------------------------------------

2007

8. Salesforce.com saw profits fall on increased revenue, but CEO Marc Benioff is as bullish as ever and this week announced the company's largest customer to date. The on-demand market leader revealed in New York that it now has a client with 25,000 subscribers - its biggest ever - in the shape of Merrill Lynch.

Merrill Lynch will be using Salesforce Wealth Management Edition which the firm hopes will take a chunk of Bloomberg’s market share. The software builds in client and prospect profiles, workflow, approvals processes and other necessary functions on top of the core CRM features.

"Merrill Lynch's decision to embrace on-demand is a clear indicator that the largest, most complicated, most technologically sophisticated deployments are now moving to the new model," said Benioff.

http://www.mycustomer.com/cgi-bin/item.cgi?id=132834&u=pnd&m=phnd
-----------------------------
9. Merrill Lynch transforms IT
May 2007

SOA strategy puts client at centre
Financial services organisation Merrill Lynch is transforming its IT management and providing tangible benefits to the business with a service oriented architecture (SOA) strategy.

Merrill Lynch head of global infrastructure solutions Diane Schueneman said rapidly changing industry complexity, driven by increased regulation and competition as well as higher client expectations, had led the company to re-think its business and operational infrastructure.

“A couple of years ago we opened accounts for customers and used those accounts as wrappers for products that our advisors would sell to broad customer segments that we thought were all broadly the same,” Schueneman told delegates at the user conference of SOA vendor, Tibco.

“The focus was on product,” she added. “But we have gotten rid of complexity by focusing more on the clients’ needs throughout the client lifecycle. The key is consistency of process, to provide ‘solutions’ to customers that meet their ongoing needs, as opposed to products.”

The company has removed operational and technology silos between business lines using SOA and business process management (BPM) products from Tibco and improved the flexibility and agility of its infrastructure to bring new products to market or respond to regulation quicker.

Schueneman said moving to a global sourcing model would see 60% of Merrill’s 15,000-plus operations staff move into two lower cost, centralised operational hubs, while the use of Tibco technology has helped rationalise and integrate core processing activities that are more scalable and lower in cost to maintain.

“Customers want more self-service, competitors like hedge funds can be customers too so innovation and speed to market become more important, and regulation is really challenging what we do every day,” she said, citing the 12 million emails a day the organisation produces as an example of the increased reliance on technology for key business enablers. “I have to be able to recall [these emails] within 30 days for the regulators.”

http://www.cio.co.uk/concern/alignment/news/index.cfm?articleid=1223
------------------------------------
