At first blush, it might seem easy to implement an automated system to test pumps. After all, the tests are well-defined and based on standards, such as those from the American Petroleum Institute (API), the American National Standards Institute (ANSI) and other organizations such as the Hydraulic Institute. However, while standards describe the test procedures, they do not (and should not) give details about the implementation.
The result is that, across organizations and even within a single organization, performing tests to the standard often means test control and measurement rely on a variety of hardware and software; tests may be automated or completely manual; and results are analyzed and reported with different tools and archived in different formats.
What can be done better when implementing a pump test station? Consider these five best practices:
1. Set Reasonable Automation Expectations
A new test system may not require 100 percent automation, and it likely should not. Many steps in pump testing are difficult to automate; even a task as simple as setting a flow rate can be challenging, depending on the valve and motor configurations at the facility. Having an operator manually configure operating points is acceptable and common. However, for repeatability and occasional time savings, consider automating execution of the test procedures and some individual steps, such as flow stabilization. Other steps, such as establishing net positive suction head (NPSH), will likely remain manual.
Even if the setup for the test condition is to be manual, the new test system should automatically check that the condition is stable and in an acceptable range of a desired operating point before taking measurements. If not, the new test system should prompt the operator to correct the situation before proceeding.
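The stability check described above can be sketched in a few lines. This is a minimal illustration, not a production implementation: the window size, tolerance band and gallons-per-minute readings are assumptions, and a real system would pull readings from the measurement I/O rather than a list.

```python
from statistics import mean

def is_stable(readings, setpoint, band_pct=2.0):
    """Return True when every reading in the window lies within
    +/- band_pct percent of the desired operating point and the
    window average is also inside the band."""
    if not readings:
        return False
    tol = setpoint * band_pct / 100.0
    in_band = all(abs(r - setpoint) <= tol for r in readings)
    return in_band and abs(mean(readings) - setpoint) <= tol

# Example: the last ten flow readings (gpm) around a 500 gpm setpoint
window = [498.2, 501.1, 499.7, 500.4, 498.9,
          500.0, 501.5, 499.2, 500.8, 499.5]
print(is_stable(window, 500.0))        # within the band: stable
print(is_stable([480.0] * 10, 500.0))  # out of band: prompt the operator
```

If the check fails, the application would prompt the operator to adjust the operating point and retry before any measurements are recorded.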
2. Reuse Existing Equipment
If the new system is an update to an obsolete test system, consider using the existing hardware. Certainly, existing piping, valves and motors would be included in that consideration, but also check on control system and measurement input/output (I/O) hardware.
Often the control system is based on a still-functional programmable logic controller (PLC). However, the PLCs in use today may be old and need to be replaced in the next few years. To simplify that future replacement, use an open platform communication (OPC) server to interface between the I/O tags and the new test PC. This OPC server acts as a hardware abstraction layer (HAL) and isolates the test application software from the specific control hardware.
Having a HAL will ease the pain when it is time to replace the old PLC, since the test application software will not need to be rewritten. If the PLC is too old to support OPC, consider upgrading to a newer model that does. And look for an OPC-UA server for the PLC rather than an older OPC-DA server; the OPC standard is moving away from OPC-DA.
For the measurement I/O, reuse may be more challenging due to proprietary software device drivers used in the existing test system. But if those device drivers are accessible from a new test application, their reuse is certainly worth considering.
3. Extract Institutional Knowledge
The engineers and operators who set up and run the tests carry a great deal of experience in their heads. That collection of learning and experience is called institutional knowledge, and it needs to be clearly understood before embarking on the design of a new pump test system. Without that understanding, the test system will almost certainly lack one or more features the test engineers and operators rely on.
Start with a document clearly stating all of the requirements for the new system. One long-standing problem with gathering requirements is the implicit assumptions people make while writing the document. For example, the test engineers may not list all the steps required for sensor calibration because they assume everyone knows them.
Requirement gathering should be followed by a meaningful design review that includes the test engineers, operators and consumers of the test data.
Watch the users in action on an existing system and look for missing details not captured in the requirements document.
Be aware that extraction of this institutional knowledge is one of the most difficult aspects of building a new test system.
This information is especially critical when trying to automate previously manual procedures. A lack of detail will obscure the tradeoff between automation savings and implementation cost, and users may end up trying to automate some procedure that has no payback and/or is unreliable.
4. Store Electronic Data Consistently
Older pump test systems often use a jumble of data file types for configuration, results and reports. The new test system should use just a few file types with consistent content across all test floors, even if those test floors are in different manufacturing facilities. This minimizes the effort to train users and allows sharing of tools. In most cases, three basic file types are sufficient: one for test configuration, one for raw test data and one for summary results. Typically, the summary data is housed in a database allowing access to pump data by serial number, test location, test operator, etc.
Data analysis and report generation can be automated to produce pump assessment information within minutes after a test completes, alleviating bottlenecks in access to test infrastructure and increasing manufacturing throughput. This database can also be queried for manufacturing metrics such as first-pass yield and time-to-test, as well as other key performance indicators.
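As a small illustration of such queries, the sketch below builds an in-memory SQLite summary table and computes first-pass yield and average time-to-test per model. The schema, column names and sample rows are all assumptions for the example; a real system would use the plant's actual database.

```python
import sqlite3

# In-memory database for illustration; schema and rows are hypothetical.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE summary (
    serial_no TEXT, model TEXT, test_site TEXT,
    operator TEXT, passed INTEGER, test_minutes REAL)""")
rows = [
    ("SN-1001", "P200", "Plant A", "jdoe",   1, 42.0),
    ("SN-1002", "P200", "Plant A", "jdoe",   0, 55.0),
    ("SN-1003", "P200", "Plant B", "asmith", 1, 39.5),
]
con.executemany("INSERT INTO summary VALUES (?,?,?,?,?,?)", rows)

# First-pass yield and average time-to-test, grouped by pump model
for model, fpy, avg_minutes in con.execute(
        "SELECT model, AVG(passed), AVG(test_minutes) "
        "FROM summary GROUP BY model"):
    print(model, round(fpy, 2), avg_minutes)
```

The same table supports lookups by serial number, test location or operator with ordinary WHERE clauses.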
Raw data is usually best stored in a format that combines the measurements with a description of the test parameters, so that analysis and inspection can be repeated after the test has run.
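A self-describing raw-data record might look like the following sketch, which keeps test parameters and measurements in one JSON document. The field names, sensor tags and values are invented for illustration; the point is only that the record round-trips intact, so it can be re-analyzed later without consulting a separate configuration file.

```python
import json

# Hypothetical raw-data record: parameters and measurements travel together.
record = {
    "test_parameters": {
        "serial_no": "SN-1001",
        "standard": "API 610",
        "speed_rpm": 3560,
        "sensors": {"flow": "FT-101", "head": "PT-201"},
    },
    "measurements": {
        "flow_gpm": [250.1, 500.3, 750.2],
        "head_ft":  [310.5, 280.2, 230.9],
    },
}

text = json.dumps(record, indent=2)   # what would be written to disk
restored = json.loads(text)           # what a later analysis would read back
print(restored["test_parameters"]["standard"])
print(len(restored["measurements"]["flow_gpm"]))
```

Binary formats with embedded metadata (TDMS, HDF5 and similar) serve the same goal for larger data sets.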
5. Manage Test Configuration
One trend in test system development that is now reaching pump testing is controlled access to test procedure definitions. In older test systems, anyone who knew how could change test operating points, the number of test points, sensor calibrations and so on.
Without control over the actual procedure contents, tests could be modified, creating results that might obfuscate comparison across units of the same model. Locking down the list of people who have editing rights to a test procedure will increase consistency.
Nevertheless, the test operator should be given the flexibility to modify certain aspects of a test procedure, such as adding a specific flow rate to a standardized performance curve test. For example, a complete API test may be run with additional operating points requested by the end user.
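This split between a locked procedure and operator additions can be sketched as follows. The base curve is immutable in the application, while operator-requested flows are validated and merged in. The point values and the allowed range are assumptions for illustration.

```python
# The standardized curve is locked (a tuple, not editable at run time);
# operators may only append extra points within an allowed range.
BASE_FLOW_POINTS_GPM = (0.0, 250.0, 500.0, 750.0, 1000.0)

def build_test_points(operator_extras, max_flow=1100.0):
    """Merge the locked curve with operator-requested flow rates,
    dropping anything outside the allowed range, and return a
    sorted, de-duplicated list of operating points."""
    extras = [f for f in operator_extras if 0.0 <= f <= max_flow]
    return sorted(set(BASE_FLOW_POINTS_GPM) | set(extras))

# The end user asked for 600 gpm; 1200 gpm exceeds the limit and is dropped.
points = build_test_points([600.0, 1200.0])
print(points)
```

The standardized points stay identical across every unit of a model, which keeps results comparable, while the extras satisfy customer-specific requests.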