

Chase Manhattan Bank - Director of Advanced Technology - 1990 to 1992

In the spring of 1990, I was hired as the "Director of Advanced Technology" at Chase Manhattan Bank, in the "Metropolitan Community Bank" division, which managed all of their retail branch banking. In essence, I was in charge of designing all of their systems that did not directly involve an IBM mainframe or a DEC VAX computer. (Essentially, everything in the branch and back offices that did not require the general accounting ledgers maintained for banking regulators.)

At the time I joined, the bank's overarching direction was to keep acquiring smaller banks nationwide, and our systems needed to support the rapid assimilation of customer and product data from those acquisitions.

One of my first projects was to examine and recommend replacement systems for the "levies and subpoenas" back-office department, which was responsible for responding to court orders and trial processes in a timely manner. When I first arrived, the entire department was set up as a bank branch, with several mainframe terminals for the entry of transactions and two printers for printing official checks, duplicate statements, and form letters. The entire remainder of the system was paper, managed by a staff of about 30 people in total. As a first step toward a solution, I identified the most material risk to the bank: failure to respond to a court's levy order in time. Imagine what a court would do if the target account had sufficient funds the day the bank received a levy, but the bank delayed several days in paying and the account holder drained the account in the interim. In those cases, the bank was still responsible for paying the levy, even though the account no longer had funds.

My design and implementation for the levies involved four PCs running DOS and two file servers running Novell NetWare. The general workflow was that two of the PCs had high-volume document scanners and a new database application to ingest all documents coming into the department. The reason for two scanning PCs was not capacity, but failover capability in the event that one of the scanners was down for maintenance (which happened at least once a week). Likewise, having two file servers provided failover and minimized downtime. The database created by those two PCs was backed up, and the backup sent offsite, every 3 hours -- a target based on the department's capacity to redo the scans in a timely manner if data were lost. Those two scanning workstations also had a 3270 (LU2) terminal session to the mainframe, and for every incoming levy, the workstation operator was required to immediately apply an administrative "FREEZE" on the account until a later step in the process.

To reduce human error, a second workstation reviewed all of the work from the first: the downstream operator coded the document and account numbers with no prior knowledge of the first operator's work, and the software compared the two encodings. If they were identical, which was the general case, the documents were approved to go to the next step in the process. If the two encodings did not match, both were sent to a third workstation for reconciliation, where the operator could see the work of both predecessors plus the original document, and could either approve a merged version or deny the document and send it back for rescanning (often due to scanning errors, unreadable scans, or other document quality issues). Once a levy had passed this initial vetting, the next step was to debit the account(s), generate an official check and mailing envelope, and then un-freeze the account.

The initial system took about 6 weeks to implement once the purchase orders were approved. After the new system had been in place and running for 6 months, the average levy was paid within 2 hours of the mail's receipt in that department, compared to 3 days under the previous system. Other than the initial scanning of all inbound documents into the database, I did not participate directly in automating the processing of subpoenas.
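
For illustration, here is a minimal sketch of the double-blind verification step described above. The types and names are my invention; the real system was a DOS/NetWare database application, not this code.

    #include <cstdio>
    #include <string>

    // One operator's independent encoding of a scanned levy document.
    // The structure and field names are assumptions for illustration.
    struct Encoding {
        std::string document_id;     // assigned at scan time
        std::string account_number;  // keyed in by the operator
        long long   amount_cents;    // levy amount, keyed in by the operator
        bool operator==(const Encoding& o) const {
            return document_id == o.document_id &&
                   account_number == o.account_number &&
                   amount_cents == o.amount_cents;
        }
    };

    enum class Disposition { Approved, NeedsReconciliation };

    // Compare the first operator's work against the second operator's blind
    // re-keying. Matches move on to debiting and check generation; mismatches
    // go to the third workstation, which sees both versions plus the original
    // image and either approves a merged version or sends it back for rescanning.
    Disposition verify(const Encoding& first_pass, const Encoding& second_pass) {
        return first_pass == second_pass ? Disposition::Approved
                                         : Disposition::NeedsReconciliation;
    }

    int main() {
        Encoding a{"DOC-0001", "123456789", 250000};
        Encoding b{"DOC-0001", "123456789", 250000};
        std::printf("%s\n", verify(a, b) == Disposition::Approved
                                ? "approved" : "needs reconciliation");
    }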

Vendor politics 101.

A general theme during my time at Chase was supporting the initiative of rapidly acquiring new banks. These acquisitions involved outfitting each acquired branch with a new technology design that placed a Windows PC at each teller and platform workstation, among other things, and allowed the platform officers and branch management to create custom forms and newsletters for their clients. At that time, branch managers had a lot of autonomy and their own P&L responsibilities to the overall bank. The system proposed by IBM was around $100,000 per branch, a number that seemed preposterous to me. I designed an alternative system, based on IBM's own advertised catalog prices, which came to under $20,000 per branch for equivalent functionality.

I also understood the corporate procurement process at the time: you filled out a form called an "AR/1" (Asset Requisition), and if the procurement amount was over a threshold of one million dollars, it had to go before the Board of Directors finance committee for approval. I believe that the IBM sales team had deliberately priced their offering under the approval limit of individual branch managers, so that IBM would make several hundred such sales instead of a single sale for the entire retail bank. I wanted my AR/1 to go to the BOD committee for oversight, and to provide visibility into the piecemeal solution offered by IBM itself at a much higher price.

Two weeks later I learned the fate of my AR/1. I was invited to the office of the EVP who managed the entire retail banking operation, and he opened the conversation with "IBM wants me to fire you, you are causing them trouble." He went on to say that he was not going to do that, because my manager (a VP) and his manager (an SVP) had both vouched for me and the quality of work I had already done for the bank. He did proceed to give me a 20-minute lecture on vendor politics and the lay of the land, as it were. IBM at the time had an elaborate private golf course on the North Shore of Long Island, just outside of Queens. He shared that he had been invited for a round of golf by the head of the local IBM sales team, and that several holes into the game they brought up my name and the competitive bid I had submitted to the BOD finance committee. The IBM sales team was frustrated by my presence and my questioning of their solutions. The EVP told me my AR/1 was "dead in the water" because he did not want to upset the $600 million per year that Chase was spending on IBM services and solutions; the $24 million in savings I was advocating was a "drop in the bucket," not worth rocking the boat in the overall IBM relationship. He did invite me to bring any "big ideas" directly to him in the future, bypassing the formal chain of leadership, and offered to coach me in larger-scale corporate politics.

Another project in my first year at Chase was to mitigate what the Board of Directors perceived as a high risk: cash management in the central vault of the bank. Branches generally managed their own cash reserves, except cash for ATM dispensing. When a branch had more cash than it needed for local operations, it sent the excess to the central vault in locked satchels with a 5-part receipt, where each party (shipper, transport carrier, recipient, accounting, and legal) received a successively revised copy of the receipt, and any alteration was visible when the shipper's copy was compared to the legal copy. As in the levies and subpoenas department, the cash vault department was set up as a single branch on the mainframe system, using 3270 "dumb terminals" to track incoming and outgoing cash.

My first trip to the cash vault was an eye-opener. I had to take the elevator 5 stops below the lobby level of the HQ at Chase Manhattan Plaza ... I was told that the 5 stops were actually more than 120 feet below street level. The elevator exited into a small lobby, perhaps 10x15 feet, facing the door of a "man trap." Adjacent to the man trap, unconnected to the elevator lobby, was the security officers' room, where the officers were armed with military-style automatic rifles. Once you entered the man trap, you were questioned about your business on the vault floor, and if your answer was satisfactory, the other end of the man trap opened to let you onto the floor. The man trap was designed with a physical interlock so that both doors could never open at the same time.

Once past the man trap, I was escorted to the cash vault, which was 3 interconnected rooms. In the first room, all of the incoming satchels were opened and counted, with the results entered into a 3270 terminal. In addition to being counted, some of the cash was removed due to wear and tear or other damage such as visible vandalism of the currency; the damaged cash was bundled and sent to the Fed for replacement. Once counted, the cash went into four-wheeled carts with a plexiglass container bolted on top, and at regular intervals a sum-count of the money in the container was handwritten on a piece of paper taped to its side. The carts were also weighed, and the weight had to reconcile with the dollar amount within a given tolerance. Over the course of a few weeks in the vault, I would see carts labeled $20 million, $15 million, and smaller amounts pass by as they left the first room.

The second room re-counted the cash from the carts, verifying the numbers written on the side, and bundled the cash into packets of 100 bills with a paper band before wrapping larger packets of $100,000 in clear shrink wrap. The second room also sorted out the highest-quality bills into bundles of 2,000 bills each as "ATM-fit" currency for ATM drawers (our ATMs at the time used drawers of 2,000 bills, typically $20 bills, or $40,000 per drawer). When there was not enough ATM-fit currency, some of the remaining currency was literally washed and pressed to remove creases and folded corners and make the bills "crisp" and ready for use in an ATM. The primary reason for this extra processing was to reduce ATM downtime due to bills stuck inside the dispensing mechanism. The ATM network at the time had over 1,000 ATMs across several states, and with each ATM holding 2 drawers worth $40,000 apiece, the network itself held $80 million in cash at any given time. Each shrink-wrapped bundle was placed on a pallet, stacked up to 5 feet high, and kept in that second room until it was needed by a branch, an ATM delivery, or a transfer of cash to the Fed.

In gathering requirements, I was curious about the transfers to the Fed, and the cash vault manager walked me to the precious metals vault on the same floor. Once inside the precious metals vault -- perhaps 40 feet deep, 30 feet wide, and 20 feet tall -- I could see that the back half of the room was pallets of gold bars, stacked high over my head, while the front of the vault had a few pallets, perhaps waist height, with bars of platinum and other precious metals. What the cash vault manager showed me, through an aisle of pallets at the back of the vault, was a tunnel entrance, also guarded by a security guard with an automatic rifle, which he told me ran directly to the basement of the Federal Reserve Bank of New York. The cash vault manager also told me that gold bars were either shipped to or received from the Fed as a net balance of that day's transactions with other money center banks.

Ultimately, the system I designed was very similar to the levies and subpoenas back-office system: scanning all inbound receiver copies of 5-part receipts and entering the inbound totals into both a local database and the mainframe accounting software. The system also scanned all shipper copies of 5-part receipts for currency orders by branches, the ATM network, damaged currency, and Federal Reserve transfers. The net total of inbound and outbound had to match the net total of cash on hand in the vault, which was inventoried manually each shift and entered into another terminal. If the net inventory did not match the net inbound and outbound, an emergency audit was triggered. I know of only one emergency audit in the year following the system's deployment, and it was triggered by someone's failure to count the currency that was in the "cleaning" process for ATM-fit currency. The system I designed and installed allowed the vault to do proof and reconciliation multiple times per day, instead of once a week, with the proof and reconciliation tied to digital images of the inbound and outbound 5-part receipts. Additionally, the new system included digital accounting of the various transfers "up the tunnel" to the Fed, giving a full proof and reconciliation of all system border crossings.
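
The proof-and-reconciliation rule itself is simple enough to sketch. The following toy shows the check that triggered an emergency audit; all names, figures, and the weight constant are assumptions, not the bank's actual values.

    #include <cmath>
    #include <cstdio>

    // Toy proof-and-reconciliation for one vault period. Net movement from
    // the scanned 5-part receipts must match the manually counted inventory,
    // and cart weight must reconcile to the dollar amount within a tolerance.
    struct VaultPeriod {
        long long opening_cents;    // cash on hand at start of period
        long long inbound_cents;    // from branch satchel receipts (receiver copies)
        long long outbound_cents;   // branch orders, ATM loads, Fed transfers
        long long counted_cents;    // manual per-shift inventory count
        double    cart_weight_kg;   // weighed plexiglass carts
    };

    // Rough assumption: $1 million in banded $100 bills weighs about 10 kg.
    const double kKgPerMillion = 10.0;

    bool in_proof(const VaultPeriod& p, double weight_tolerance_kg) {
        long long expected = p.opening_cents + p.inbound_cents - p.outbound_cents;
        if (expected != p.counted_cents) return false;  // triggers emergency audit
        double expected_kg = (p.counted_cents / 100.0 / 1e6) * kKgPerMillion;
        return std::fabs(p.cart_weight_kg - expected_kg) <= weight_tolerance_kg;
    }

    int main() {
        VaultPeriod p{5'000'000'000LL, 2'000'000'000LL, 1'500'000'000LL,
                      5'500'000'000LL, 550.0};  // $50M + $20M - $15M = $55M counted
        std::printf("%s\n", in_proof(p, 5.0) ? "in proof" : "EMERGENCY AUDIT");
    }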

Tricky Business - part 1.

During my first year, I learned from a branch manager that Chase was keeping two sets of books for its real estate lending portfolio: one set that matched the loan paperwork with the borrower, and a second set of general ledger accounting that was presented to regulators. The theory behind the second set of books was that if a loan became difficult, or non-performing, account managers could change the interest rate of the loan to 0 percent, which removed it from the monthly regulatory report of "non-performing loans." I reported my observations to my chain of leadership, and I have no idea what became of the report because, even when I left the bank, I was still hearing rumors about the practice.

"Online banking"

At the beginning of my second year at Chase I was tasked with providing custom solutions to high-net-worth clients, in addition to my other initiatives. These clients included nation-states and their consular agencies. At the time, an internal group at the retail bank had created what they called "online banking" for such customers by setting up a custom (isolated) CICS region with branch banking software they had modified to reduce the function points, and by configuring RACF and some front-end processors for leased lines running directly from the Chase data center to the customer site. These leased lines, with their IBM modems, provided LU2 service to those customers. Instead of a 3270 dumb terminal, many of these customers had a DOS-based front end with LU2 drivers and software that could translate mainframe-generated LU2 screens into local formats, including CSV. The functionality was generally limited to statements, balances, line-item details, CSV downloads, initiation of payments by official check to a vetted list of payees, and initiation of wire transfers. Wire transfers over $5,000 were verified by a callback from the bank operations center, which provided a digital challenge code that the customer keyed into a custom hardware key provided by the bank, then read back the number shown on the key's display.

After dealing with several such customers, I compiled a list of requirements that would simplify their daily jobs, and proposed, then created, a proof-of-concept online banking application for Windows that was eventually made available to all retail customers at the discretion of their branch manager. Among the extra features I proposed were: (1) availability on Windows 3.1 as a GUI application, (2) the ability to initiate payments by uploading a list, (3) dial-up to the front end, instead of just leased lines, (4) challenge-based authentication in addition to passwords, and (5) the ability to manage all customer-specific settings; (6) later-added features included multiple accounts per login session, including accounts across the various product types offered by the retail bank, and the ability to transfer money between those accounts. Some of those new functions needed new backend support, which I coordinated with the CICS application team.

My first task in creating the Windows front end was designing the graphical user interface forms that corresponded to the desired functionality. The second task was to set up and configure the communications infrastructure, from dial-up to the CICS region on the mainframe. I chose LU 6.2 because it allowed two-phase commits when multiple sessions were contending for resources, provided full-duplex asynchronous communication, and was robust enough to survive a dial-up modem losing its connection and redialing without a complete session reset. There were two main challenges in the communication setup: (1) our mainframe sysop community had never done an LU 6.2 setup, and (2) there were no IBM SNA drivers for Windows 3 at that time. As for the first problem, after being stonewalled by IBM's sales team, I ordered a stack of IBM "Red Books" covering their communication topics in SNA, MVS, RACF, and CICS, and I found one mainframe sysop who trusted me enough to test my configurations under his authority. After a day or two of trial and error, we actually had LU 6.2 sessions working through the entire tech stack, with dial-up modems, into DOS with IBM's DOS drivers for SNA.

Once I had LU 6.2 into DOS, I read Microsoft's documentation on Virtual Device Drivers and built a bidirectional shim that wrapped the DOS SNA device driver as a virtual device driver communicating with a Windows application via a bidirectional message queue. This included mapping memory, via virtual paging and physical paging, between (a) DOS, (b) Windows internals, and (c) the Windows application. Once the communication stack was running on Windows and the mainframe, I was finally able to build the business logic into the Windows application. Looking back, from an MVC perspective, the mainframe was responsible for the business logic, and the Windows application applied the same business logic to prevent errors from reaching the mainframe code, implementing workflow, field validation, and form-level validation prior to submitting each form's contents via LU 6.2.
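
As an illustration of that split, here is a small sketch of client-side field and form validation gating a submission. The field names and rules are hypothetical; the real validation lived inside the Windows 3.x application, with the actual send happening over LU 6.2.

    #include <algorithm>
    #include <cctype>
    #include <cstdio>
    #include <functional>
    #include <map>
    #include <string>
    #include <vector>

    // A "form" is just named fields; rules are per-field checks. Both the
    // field names and the rules below are invented for illustration.
    using Form = std::map<std::string, std::string>;
    using FieldRule = std::function<bool(const std::string&)>;

    bool all_digits(const std::string& s) {
        return !s.empty() && std::all_of(s.begin(), s.end(),
                   [](unsigned char c) { return std::isdigit(c); });
    }

    // Returns the list of fields to highlight locally; an empty list means
    // the form may be submitted to the mainframe. Field-level checks run
    // first, then a form-level rule that knows how fields interrelate.
    std::vector<std::string> validate(const Form& form,
                                      const std::map<std::string, FieldRule>& rules) {
        std::vector<std::string> errors;
        for (const auto& [field, rule] : rules) {
            auto it = form.find(field);
            if (it == form.end() || !rule(it->second)) errors.push_back(field);
        }
        // Example form-level rule: a transfer needs two distinct accounts.
        auto from = form.find("from_account"), to = form.find("to_account");
        if (from != form.end() && to != form.end() && from->second == to->second)
            errors.push_back("to_account");
        return errors;
    }

    int main() {
        std::map<std::string, FieldRule> rules{
            {"from_account", all_digits}, {"to_account", all_digits}};
        Form form{{"from_account", "12345"}, {"to_account", "12345"}};
        for (const auto& f : validate(form, rules))
            std::printf("highlight field: %s\n", f.c_str());
    }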

Meeting Cost Tracker: A Lesson in Process Efficiency

Chase routinely held 2-hour meetings with 10-20 high-salaried employees just to approve $10,000 expenditures—meaning the cost of the meeting itself often exceeded the amount being approved. To expose this inefficiency, I did the following (a sketch of the tracker appears after the list):

  1. Set up a computer outside the meeting room for attendees to confidentially enter their annual salaries.
  2. Displayed a running tally of the meeting’s cost in real-time as people debated the expense.
  3. Created an undeniable, data-driven case that the approval process itself was more expensive than the decision being made.
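
A toy version of the ticker is below; the salaries, the 2,080-hour work year, and the short demo loop are all illustrative assumptions.

    #include <chrono>
    #include <cstdio>
    #include <thread>
    #include <vector>

    // Toy meeting-cost ticker. Salaries were entered privately before the
    // meeting; only the anonymous running total was ever displayed.
    int main() {
        std::vector<double> annual_salaries{140000, 95000, 180000, 120000};  // sample
        double total_annual = 0;
        for (double s : annual_salaries) total_annual += s;
        // Assume ~2080 working hours/year to get a cost per second of meeting time.
        const double cost_per_second = total_annual / (2080.0 * 3600.0);

        auto start = std::chrono::steady_clock::now();
        for (int tick = 0; tick < 6; ++tick) {  // short demo; the real one ran all meeting
            std::this_thread::sleep_for(std::chrono::seconds(5));
            double elapsed = std::chrono::duration<double>(
                                 std::chrono::steady_clock::now() - start).count();
            std::printf("\rThis meeting has cost $%.2f so far",
                        elapsed * cost_per_second);
            std::fflush(stdout);
        }
        std::printf("\n");
    }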

"Screen scraping", Middleware, and Data Dictionaries

In 1990 there was essentially no such thing as a networked API for OLTP systems that wanted to talk to each other, especially if those systems were different breeds, such as IBM CICS to VAX, Windows, DOS, or UNIX. The next best thing was the IBM SNA network stack with what was called "LU2", somewhat similar to TELNET but with structured data for 3270 terminals. At least each screen or "form" at Chase had its own unique form-id or screen-id as an anchor, which is more than I can say of many "modern" applications.

To implement several of my retail bank back-office projects, I rolled my own middleware layer. I started by enumerating all 3270 "screens" available from the CICS system, then wrote code that scraped the "green screen" into a "dictionary" consisting of screen_id, field_name, field_contents, field_attributes, and later field_types. I did all of the scraping on a PC over LU2 from MS-DOS (my tooling also worked on OS/2 PCs). The scripts detected fields on different screens that had the same names, and I used a manual process to determine whether those fields were semantically equivalent. Likewise, there were occasionally fields with different names that were semantically equivalent. For each semantically distinct field, I also wrote field-contents validation and "screen level" validation that knew the rules for how fields interrelated. There was no formal ETL standard at the time: no XML, no JSON, no CORBA, no IDL.

Once the data dictionary was populated, I wrote another tool to vet it against the test region for the CICS code, completely testing the read/write/modify properties of each field -- akin to modern-day "fuzzing" techniques. My code watched the response to sending a screen: the response could be "accepted", "field errors" (with highlighted fields), or "screen updated", which would occur if another user modified the same record and beat you to submission. Mainframe concurrency control passed the buck to the user. I later turned the test-region fuzzing tool into a watchdog observer by running it every several hours. Although the screens were relatively stable, changes were made, often without notice to the other departments that used them. The watchdog would detect those changes in the test region, somewhat proactively, and send messages to the various system programmers warning them to update the data dictionary before rolling their changes into production.

The data dictionary allowed programmers to synthesize new forms from existing back ends, both for read-only data and for new data entry. It evolved to handle multiple versions of a screen as it changed, with rudimentary version control similar in spirit to Google's widely used protocol buffers. I later added a screen scraper for the VAX systems so that data could be modified in the reverse direction, and so that third-party computers such as PCs could communicate with all of the bank's backends to create new interactive forms and applications. This data dictionary came to be used by multiple departments so that their users could transparently interact with multiple back-end systems without changing terminals, without changing screens, and without calling another department to verbally relay data changes.
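
The shape of the dictionary, and the watchdog's drift check, might be sketched like this; the struct layout and names are my reconstruction for illustration, not the original code.

    #include <cstdio>
    #include <map>
    #include <string>
    #include <vector>

    // Hypothetical shape of one data-dictionary entry, keyed by the unique
    // screen-id that anchored every CICS "green screen" form.
    struct FieldEntry {
        std::string field_name;
        std::string field_attributes;  // e.g. protected, numeric, max length
        std::string field_type;        // the semantic type added later
        bool operator!=(const FieldEntry& o) const {  // layout comparison only
            return field_name != o.field_name || field_attributes != o.field_attributes;
        }
    };
    using ScreenDictionary = std::map<std::string, std::vector<FieldEntry>>;

    // Watchdog pass: compare a fresh scrape of the test region against the
    // dictionary and report screens whose layout drifted, so system
    // programmers could update the dictionary before changes hit production.
    std::vector<std::string> detect_drift(const ScreenDictionary& dict,
                                          const ScreenDictionary& scraped) {
        std::vector<std::string> changed;
        for (const auto& [screen_id, fields] : scraped) {
            auto it = dict.find(screen_id);
            bool differs = (it == dict.end()) || (it->second.size() != fields.size());
            for (size_t i = 0; !differs && i < fields.size(); ++i)
                differs = it->second[i] != fields[i];
            if (differs) changed.push_back(screen_id);
        }
        return changed;
    }

    int main() {
        ScreenDictionary dict{{"ACC1", {{"acct_no", "numeric,10", "account"}}}};
        ScreenDictionary now{{"ACC1", {{"acct_no", "numeric,12", "account"}}}};
        for (const auto& id : detect_drift(dict, now))
            std::printf("screen %s changed -- update the dictionary\n", id.c_str());
    }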

Alternative computers

As Director of Advanced Technology, I was often approached by fledgling computer companies trying to break into a market dominated at the time by IBM and DEC. One such company was Kendall Square Research, which had built a scalable SMP computer based on a ring topology and completely shared "main memory." Their product was interesting enough to me that I paid them a site visit at their corporate offices near Boston.

Disruption

As part of my work on online banking, I was able to requisition a new PS/2 Model 95 with a 486DX processor as a development workstation. Once I became familiar with it, I realized that it would be able to process the bank's daily DDA batch job in about 1 minute. By then I had come to know that the bank's nightly DDA batch posting, from memo-posted transactions to the full ledger, took about 20 minutes and ran entirely in IBM 360 assembler code written in the late 1960s by a tech VP with whom I was personally acquainted. His code was relatively unmodified over the years -- rock-solid foundational code for a robust financial system. The nightly batches at the time represented about 600,000 memo-posted transactions, applied to roughly 10 million customer DDA accounts.

I made a bet with the VP, the original author of the DDA posting, that my PC would beat his mainframe by a factor of 10 -- a lot of people in the technology department took various sides of that bet. We agreed that the test data would consist of 10 million synthetic accounts and 600,000 synthetic transactions which had previously been vetted by the bank's auditors as not disclosing any customer PII. The mainframe processed the test data in 19 minutes. I designed my system to (1) sort the memo-post items by account number and time of day, (2) aggregate the transactions for each account, and (3) apply the aggregated dollar amount to the customer ledger for that account. I wrote my entire demo in Borland C++. My original estimate of 1 minute proved accurate when my system, on a lowly tower PS/2, performed the entire update in 67 seconds ... screaming past the legacy mainframe system. The demo was an eye-opener for the tech folk at Chase.
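
The three steps translate almost directly into code. A compressed sketch follows -- modern C++ with invented types and sample data, not the original Borland C++ demo.

    #include <algorithm>
    #include <cstdio>
    #include <unordered_map>
    #include <vector>

    // Sketch of the demo's three steps: sort memo-post items by account and
    // time of day, aggregate per account, then apply one net adjustment to
    // each touched ledger account.
    struct MemoPost { long long account; int time_of_day; long long cents; };

    void post_batch(std::vector<MemoPost>& items,
                    std::unordered_map<long long, long long>& ledger_cents) {
        // (1) sort by account number, then time of day
        std::sort(items.begin(), items.end(),
                  [](const MemoPost& a, const MemoPost& b) {
                      return a.account != b.account ? a.account < b.account
                                                    : a.time_of_day < b.time_of_day;
                  });
        // (2) aggregate each account's transactions, (3) apply the net amount
        long long current = -1, net = 0;
        auto flush = [&] { if (current != -1) ledger_cents[current] += net; };
        for (const auto& m : items) {
            if (m.account != current) { flush(); current = m.account; net = 0; }
            net += m.cents;
        }
        flush();
    }

    int main() {
        std::vector<MemoPost> items{{42, 900, -5000}, {42, 1030, 12000}, {7, 815, 300}};
        std::unordered_map<long long, long long> ledger{{42, 100000}, {7, 50000}};
        post_batch(items, ledger);
        std::printf("acct 42: %lld cents, acct 7: %lld cents\n", ledger[42], ledger[7]);
    }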

Tricky Business - part 2

During my second year, while I was hand-holding high-net-worth "special clients" through the setup of the new Online Banking for Windows, I was present at the Consulate General of an African nation that had hundreds of millions on deposit with Chase across several dozen accounts. They wanted all of the accounts linked for online banking purposes, with a rollup of account balances ... a feature request not in my original requirements. I came to learn of a practice by certain staff at the CG mission of rolling the mission's Time Deposit accounts (mostly CDs) over into their personal accounts for a week at the end of each CD term. At the end of the week, the money was placed back into the CG's account at the bank's advertised rates. The CG was none the wiser, since their statements showed continuous funds and a rollover to a new term at a new interest rate. However, the CG staffers would collect about 4 weeks of interest per year on hundreds of millions of dollars, which amounted to roughly $5 million a year paid to those staffers (the arithmetic is plausible: at the high CD rates of the era, 4 weeks of interest on several hundred million dollars reaches that figure). The bank was aware of, and condoned, these practices because it deemed them necessary to compete with other banks for the business of these consulates. I was told that similar practices prevailed in the commercial bank when dealing with the funds of large corporations, with the "rollover" benefits paid to the financial staff of those corporations, likely out of view of any corporate governance. The net downside to the bank was that it was paying 56 weeks per year of interest on each such Time Deposit account, instead of a calendar-based 52 weeks: an arbitrage of interest in favor of the staff who selected which banks to do business with. I reported this behavior via email to my chain of leadership.

FBI? - things that make you go ummmm

One day I was called to a meeting in the office building next door (Chase Credit Cards) by the SVP of the retail credit card business. I had no idea what the meeting was about, or who would be there, until after I arrived. I saw a few people I knew from various Chase divisions, and 3 people who were introduced as FBI agents. The FBI explained that they were investigating a fraud case at [unnamed money center bank, not Chase] that had a large financial impact on a group of Chase credit card holders. They wanted to understand our fraud detection systems, and how the staff at the [unnamed] bank had managed to perpetrate the fraud for nearly 7 years, totaling over $3 billion, without being caught. They also wanted us to write some custom software reports that would identify which Chase customers were victims of the scheme.

They explained that the genesis of the scheme was that the SVP over credit card technology at the unnamed bank had formerly been a brilliant programmer on that bank's credit card team, and that as he rose through the ranks, he maintained a close friendship with at least one of the remaining programming staff. They described the fraud as follows: [1] identify regular customers of that bank's commercial accounts, especially restaurants and service businesses that those customers frequented more than two times per month, [2] invent, out of thin air, an extra transaction by that customer at that business establishment, [3] transfer/credit those funds to additional ledger accounts in the merchant credit card operation, and then [4] credit those funds to their personal benefit via like-named businesses at other banks. The FBI claimed the fraud had been ongoing for over 7 years, to the tune of billions of dollars, and had been discovered by a new financial control system set up by that bank.

The fraud worked because most people did not do a detailed reconciliation of their monthly credit card statements, and would not notice an extra transaction from a merchant they frequented several times per month. If a customer did notice [note that the victims were customers of banks other than the culprit bank], they could call their card issuer to report a problem. If they called Chase, and the transaction was under a certain dollar threshold, Chase would simply eat the cost of a refund, because the refund was cheaper than the operational cost of starting an investigation. If the dollar amount was high enough to warrant an investigation, Chase, as the card's issuer, would contact the culprit bank's merchant operation with a form sent by mail, detailing the "faulty" transaction, at which point the culprit bank would generally issue a credit and eat the loss, again to avoid the expense of an investigation. I do not know the details, but it was clear that of the billions stolen, not one transaction ever triggered a full investigation by the culprit bank's merchant credit card operation. I later learned that the SVP at the culprit bank had done a plea deal, receiving a 3-year jail sentence, and had to regurgitate $200 million as restitution to his employer. 3 years, $3 billion ahead ... really?

Keep in mind that at that time, the customer was always right in any credit card dispute unless the merchant produced a signed charge slip, and even then the signature on the slip had to match the signature on record at the card-issuing bank -- unlike now, where the default is that the merchant is "right" and the cardholder has the impossible burden of proving a negative.
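
Out of curiosity about what those victim-identification reports might have looked like, here is a speculative sketch -- entirely my reconstruction, not the actual report code. It flags months where a cardholder's charge count at a merchant they already frequented jumps above that pair's own baseline, which matches the shape of the fraud: one invented extra charge at a merchant visited more than twice a month.

    #include <cstdio>
    #include <map>
    #include <string>
    #include <utility>
    #include <vector>

    struct Charge { std::string customer, merchant; int year_month; };

    // Flag (customer, merchant) months whose charge count exceeds that pair's
    // own monthly average. Thresholds are invented for illustration.
    std::vector<std::string> flag_possible_victims(const std::vector<Charge>& charges) {
        std::map<std::pair<std::string, std::string>, std::map<int, int>> counts;
        for (const auto& c : charges) counts[{c.customer, c.merchant}][c.year_month]++;
        std::vector<std::string> flags;
        for (const auto& [key, by_month] : counts) {
            double total = 0;
            for (const auto& [month, n] : by_month) total += n;
            double baseline = total / by_month.size();
            if (baseline <= 2.0) continue;  // only merchants the customer frequents
            for (const auto& [month, n] : by_month)
                if (n > baseline + 0.5)     // an "extra" charge above the norm
                    flags.push_back(key.first + " @ " + key.second + " in " +
                                    std::to_string(month));
        }
        return flags;
    }

    int main() {
        std::vector<Charge> charges{
            {"alice", "cafe", 199001}, {"alice", "cafe", 199001}, {"alice", "cafe", 199001},
            {"alice", "cafe", 199002}, {"alice", "cafe", 199002}, {"alice", "cafe", 199002},
            {"alice", "cafe", 199003}, {"alice", "cafe", 199003}, {"alice", "cafe", 199003},
            {"alice", "cafe", 199003}};  // one extra charge in 199003
        for (const auto& f : flag_possible_victims(charges))
            std::printf("possible victim: %s\n", f.c_str());
    }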

Unions and Mafia

Sometime early in my second year, I was at a bank branch in Manhattan, training a branch employee on how the new system worked and how to troubleshoot various technical issues. One of their issues was that one of the platform officers was unable to print documents to their local printer, and the problem had already been "looked at" by their local IBM tech. I looked over the printer and its cabling, disconnected the cable from the printer, and noticed a bent pin in the cable housing. I gently straightened the bent pin with a pencil, put the cable back into the printer, and it started working, spewing out a day's worth of print jobs that had queued up in the PC. The next day, I was called to the office of the SVP managing retail bank operations and reprimanded for "performing electrical work" in the branch -- he was emphatic that any and all cabling had to be done by IBEW Local 3, and that I would be fired if another offense was reported. Lesson noted.

Another similar incident comes to mind from when I was helping move retail back-office operations from a Queens office to a new Brooklyn office. I noted that of the 100 new PCs I had requisitioned for the old office, only about 85 appeared at the new office. When I brought this discrepancy to the attention of my chain of leadership, I was told that 10-15% "falling off the back of the truck" was an acceptable loss (or "tax") for any interoffice move within the City of New York. My head was spinning.

Treasury, Wire transfers, and lost keys.

At the time I was with the bank, the wire transfer department was a completely separate entity, serving retail, commercial, and government customers, and its primary office had a data center in Brooklyn. At that data center, there were 3 computers in separate glass-walled rooms adjacent to each other: one for tallying inbound wires, one which did authorization and created outbound wires, and a third which tallied outgoing wire transfers. Each computer ran software written by entirely separate teams, and those 3 computers performed a proof and reconciliation of net wire transfers multiple times per day.

Customers who initiated wire transfers by any means other than a low-dollar transaction at a bank branch, and customers whose wire transfers exceeded a threshold amount (which I believe was $100,000 at the time), were required to have a challenge-response key issued by the bank. At the time, the challenge-response key was about the size of a modern smartphone, but twice as thick at the near end and six times as thick at the far end (like an early-90s calculator). When a customer with such a key initiated a wire transfer, a person from Chase would call them at a designated number and give them a challenge code, which had to be entered into the key, and the customer was then required to read back the response shown on the key's display. Once the key was verified, the wire transfer was released to the Brooklyn data center.

I was invited into that department to help in response to what was deemed a dire emergency. The wire transfer computer itself had crashed one day, in a manner blamed on the operating system code, and an outside consultant from the computer vendor had printed a "core dump" of the entire system RAM (about 7 boxes of paper) and moved it to a nearby conference room. The core dump, and its removal from the "secure" glass-enclosed computer room, was deemed a major security breach because it contained the PRIVATE KEYS of every single challenge-response key in the field, and after the removal there was no chain of custody or access log for the printout itself. That printout represented literally tens of thousands of potentially compromised customer accounts and high-value wire transfers, and the breach was deemed too high a risk for the bank to ignore.

The approach I came up with, which the team adopted, took into account that even in the best of scenarios, the tens of thousands of physical keys in the hands of customers would take 8-10 weeks to replace. I advocated making a report of each customer's wire transfers for the past year and sorting it with the highest-dollar customers first (reasoning that the highest dollar value of customer transactions represented the highest risk to the bank if compromised). We ordered new physical keys from our vendor and started a delivery campaign, highest-dollar customers first, to replace their keys with the new ones. Our team did not tell the customers about the compromise, but simply stated that the new key would be necessary for all future wire transfers.

A few weeks later, one particular key came to my attention because there was no contact information on the associated accounts, just a flag to refer any requests to the head of treasury operations. The accounts for that key held several hundred million dollars, but had not been touched in several years. My instructions, which had come from the audit committee, were not to issue or mail any of the new keys to bank staff or personnel, but to make sure that each key was mailed directly to the authorized representative of the client. The head of treasury operations asked me to issue the key to him personally, and I had to refuse. The next day, I was called to the office of the bank's president, who informed me that he knew exactly who the customer was and would deal with it personally. He also had written authorization from the audit committee to proceed in that fashion, so I delivered the key to him, as instructed. I later learned that the accounts associated with that key had been directly involved in the Iran-Contra funding debacle several years earlier. I had also been hearing rumors that each and every money center bank had a highly placed CIA operative in its executive suite. Was the president of the bank such a person? I do not know for sure.
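
For flavor, here is a toy of the challenge-response exchange described above. The real keys used a vendor's algorithm with per-device secrets -- the very secrets exposed in that core dump; the mixing function below is a stand-in for illustration, not real cryptography, and must not be used for actual authentication.

    #include <cstdint>
    #include <cstdio>

    // Toy challenge-response: the bank reads a challenge to the customer over
    // the callback; the customer keys it into the device and reads back the
    // six-digit response computed from the device's secret.
    std::uint32_t respond(std::uint32_t device_secret, std::uint32_t challenge) {
        std::uint32_t x = device_secret ^ challenge;
        for (int i = 0; i < 8; ++i)            // a few avalanche rounds
            x = (x ^ (x >> 13)) * 0x9E3779B1u;
        return x % 1000000;                    // six digits, easy to read by phone
    }

    int main() {
        std::uint32_t secret = 0x5EC12E7u;     // provisioned into one physical key
        std::uint32_t challenge = 483921;      // read to the customer on the callback
        std::printf("expected response: %06u\n",
                    (unsigned)respond(secret, challenge));
    }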

Safety, Security and Visible Controls

One theme running through all of the above is how Chase management perceived safety and security through "visible controls." The bank did not invest in leading-edge security measures ... our investments in security controls were actually viewed as a "marketing cost" to manage the perception of safety in the eyes of customers. Armored cars? Yes. Security guards at every branch? Yes. At some point, it became cheaper to insure risks, or even hire specialized teams (read: mercenaries) to recover stolen funds, than to raise the security bar. I became aware of a single fraudulent wire transfer valued at over $300 million wherein the customer expected the bank to cover the loss due to the bank's lack of appropriate security controls. Never mind that the transfer had been authorized by a former employee of their own Consulate General, and that the funds went to a secret account at a Swiss bank. I am informed that Chase actually hired a private investigator and a mercenary team of about 20 people (a full squad) to "capture" that former employee at their hiding spot in the Maldives and, once captured, "convince" that person to wire the funds back to the original account. Folklore? Fact? Fiction? It was a "current" story while I was helping the wire transfer department with one of their other issues. I will let you decide, but keep in mind the apparent role of the bank's president at the time, as noted above.

Football pool and machine learning.

Near the end of my second year, I got involved in the football pool created by the tech team on my floor in Garden City. It was for fun, and the bets were funny money, with recognition and reputation being the primary currency. My approach to the betting was fully data-driven: I split each official football team into two separate teams on the computer, one for offense and one for defense, and I kept a separate line item for each and every player. One of my primary data sources was a newspaper called the "Sports Daily", which had the key stats for the top players in each of the previous week's games, as well as reports of player injuries that could affect upcoming performance. The bets were not just win-lose, but based on the spread. After manually entering all of the relevant data from my sources, I managed the data in a database linked to an early version of Microsoft Excel. I used multidimensional statistics with gradient-ascent regression models and my own simulated annealing algorithms to process the data, scoring each team's defensive roster against the opposing team's offensive roster. At the end of the season, my model had scored over 70% on the win-loss predictions, and was by far the closest on the predicted spreads.
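
A stripped-down sketch of the fitting idea follows; the features, data, and learning rate are invented, and this shows only the gradient-step portion (the real model lived in a database feeding Excel, with simulated annealing on top). Gradient descent on squared error is the mirror image of gradient ascent on the negated loss.

    #include <cstdio>
    #include <vector>

    // Linear model scoring an offense roster against the opposing defense,
    // fit by gradient steps on squared error against observed point spreads.
    struct Game { std::vector<double> features; double actual_spread; };

    double predict(const std::vector<double>& w, const std::vector<double>& f) {
        double s = 0;
        for (size_t i = 0; i < w.size(); ++i) s += w[i] * f[i];
        return s;
    }

    void fit(std::vector<double>& w, const std::vector<Game>& games,
             double learning_rate, int epochs) {
        for (int e = 0; e < epochs; ++e)
            for (const auto& g : games) {
                double err = g.actual_spread - predict(w, g.features);
                for (size_t i = 0; i < w.size(); ++i)
                    w[i] += learning_rate * err * g.features[i];  // gradient step
            }
    }

    int main() {
        // Two toy features: offense yards/game minus opposing defense yards
        // allowed, and a crude injury penalty.
        std::vector<Game> games{{{35.0, -1.0}, 7.0},
                                {{-20.0, 0.0}, -3.0},
                                {{10.0, -2.0}, 3.0}};
        std::vector<double> w(2, 0.0);
        fit(w, games, 0.0005, 2000);
        std::printf("fitted weights: %.4f %.4f\n", w[0], w[1]);
    }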

Mapping the Hidden Organization

I played corporate politics like Chess or some other deep game — I made graphs of who had worked with whom, every time I heard a story — and then, when I wanted something from a particular decision maker, I referred to the graph, which, although different from the current org chart, usually mapped effective communication conduits for the planting of seeds … I could plant 5 signals at different points in the graph network, confident that many of them would arrive at the intended recipient, and they would have heard my idea 5 times before I even arrived in their office — when I brought something up, they would have an AHA moment of synthesis…

I later learned that the IBM sales office used a similar technique, and that their "maps" of who had worked with whom went back decades, across member companies in entire industries. They knew how to get what they wanted.
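
The mechanics are simple to sketch (all names invented): a who-worked-with-whom graph plus a shortest-path search yields the conduits through which a "seed" can travel toward a decision maker along several independent paths.

    #include <cstdio>
    #include <map>
    #include <queue>
    #include <set>
    #include <string>
    #include <vector>

    // Edges are who-worked-with-whom relationships gleaned from stories.
    using Graph = std::map<std::string, std::vector<std::string>>;

    // Breadth-first search: the shortest conduit from a seed person to the
    // target decision maker, or an empty path if none exists.
    std::vector<std::string> path_to(const Graph& g, const std::string& from,
                                     const std::string& target) {
        std::map<std::string, std::string> parent;
        std::set<std::string> seen{from};
        std::queue<std::string> q;
        q.push(from);
        while (!q.empty()) {
            auto cur = q.front(); q.pop();
            if (cur == target) {
                std::vector<std::string> path{cur};
                while (cur != from) { cur = parent[cur]; path.insert(path.begin(), cur); }
                return path;
            }
            auto it = g.find(cur);
            if (it == g.end()) continue;
            for (const auto& next : it->second)
                if (seen.insert(next).second) { parent[next] = cur; q.push(next); }
        }
        return {};
    }

    int main() {
        Graph g{{"me", {"ana", "bob"}}, {"ana", {"evp"}},
                {"bob", {"carol"}},     {"carol", {"evp"}}};
        for (const std::string seed : {"ana", "bob"}) {   // plant the same idea twice
            std::printf("via %s:", seed.c_str());
            for (const auto& hop : path_to(g, seed, "evp"))
                std::printf(" -> %s", hop.c_str());
            std::printf("\n");
        }
    }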

Technologies

PC + IBM RISC workstation
RS/6000
AIX
OS/2
Windows 3.0
Windows 3.1
Windows SDK
Windows DDK
EXCEL
CICS
COBOL
JCL
EBCDIC
MVS
LU2
LU6.2
SNA (Systems Network Architecture)
Token Ring
Novell NetWare