PICS – Plant Integrated Computer System
Electronic Visions, Inc. (EVI) originally created PICS to replace an existing plant computer at a nuclear power plant. The original system used two MODCOMP minicomputers to coordinate the collection of data from sensors throughout the site, produce values derived from the sensor readings, and present the data as both text and graphical reports and displays. The original system had a primary and a backup computer, but there were times when both could be out of service. PICS was created to meet the customer's need to separate the data collection, value derivation and other functions into smaller subsystems, so that the complete failure of one subsystem would not affect the others, allowing the plant to continue operating, though in a degraded mode.
Basic Facts
- Runs on economical PC hardware
- Core system software runs under Windows
- Front-end data collection software runs under DOS, Linux, and/or Windows
- Distributed, highly configurable
- Fault-tolerant
- Mature (development began in 1993, the first installation went live in 1997)
Definitions
- subsystem - The basic configuration unit within PICS. Subsystems may be created for any purpose necessary to fulfill the needs of the site. For example, most sites have a point database subsystem, a computed/derived point subsystem, a data archiving/retrieval subsystem, a data collection subsystem, a bridge subsystem (to provide external access to the data) and some number of display subsystems.
- node - A single computer in PICS. Typically, a node is one of the two nodes that comprise a subsystem, though a subsystem may have only one node. Node may also be used as a generic term for a PICS WAN client.
- 8800 - Generic EVI designation for a remote PC that interfaces with front-end gear to collect data for PICS. To date, 8800s have been created to interface with custom hardware, MODACS™, AVCO™, CPI™ and DM-200™ devices.
- section - A group of PICS nodes that may continue operating (in a slightly degraded state) even when physically disconnected from the remainder of the PICS network. At a site with several control rooms, each monitoring different systems, this allows each control room to be a separate section and to continue receiving and displaying that control room's data even if the room's link to PICS is severed. When the link is restored, the section automatically "rejoins" PICS.
Basic Design
PICS was designed from the ground up to be a very modular, distributed system. Every PICS node runs a basic set of core programs plus one or more additional programs, as defined by the node's (or subsystem's) purpose. A fully functional PICS can be configured with as few as one subsystem (or even one node, if backup is not required), though most sites choose to have separate subsystems for the point database, data collection, historical archiving and operator activities.
PICS modularity exists at several levels at once: programs, subsystems/nodes, and sections.
Program Modularity
Every PICS program performs a single, specialized function and interacts with PICS using a set of standardized programming interfaces (APIs) that were designed specifically for PICS. This allows us to mix and match programs as needed by each individual site to balance load and criticality/availability.
One result of program modularity is that it is relatively easy to create subsystem configurations that EVI never thought of; in fact, this is how the first BRIDGE and REPEATER subsystems were created. PICS has a central point database and a server program (sdserver) that makes the database available to all other nodes. One customer decided to run the server on a client node to provide access to PICS data from outside the PICS network, and the modular nature of the programs allowed that configuration to work exactly as the customer expected, even though the developers had never envisioned it.
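The actual PICS APIs are proprietary and are not reproduced here. The fragment below is only a minimal sketch of the idea: every name in it (pics_point_id, pics_rt_read, the tag name) is made up, and the stub bodies stand in for what the real agents (sdclient for the point database cache, rtdba for real time data) would provide. The point is that a program written only against these standardized local interfaces does not care which node it runs on, so it can be placed wherever a site's configuration calls for it.

    /* Hypothetical sketch only: these names are NOT the real PICS APIs.
     * The stubs below stand in for the local agents (sdclient, rtdba) so the
     * example compiles on its own; a real program would use the PICS
     * libraries instead. */
    #include <stdio.h>
    #include <string.h>

    typedef struct { double value; int quality; } rt_sample;

    /* --- stand-ins for the standardized local interfaces --------------- */
    static int pics_point_id(const char *tag)       /* point database lookup */
    {
        /* real code would search the cached point database kept by sdclient */
        return (int)strlen(tag);                    /* fake but deterministic */
    }

    static int pics_rt_read(int point_id, rt_sample *out)  /* real-time read */
    {
        /* real code would read the value distributed by the local rtdba agent */
        out->value   = 100.0 + point_id;
        out->quality = 0;                           /* 0 == good, assumed     */
        return 0;
    }

    /* --- an application written only against those interfaces ---------- */
    int main(void)
    {
        const char *tag = "EXAMPLE-POINT-01";       /* made-up tag name       */
        int id = pics_point_id(tag);

        rt_sample s;
        if (pics_rt_read(id, &s) == 0)
            printf("%s (id %d) = %.2f, quality %d\n", tag, id, s.value, s.quality);

        return 0;
    }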
Subsystem/Node Modularity
Each subsystem may be assigned a number of roles, depending on a site's criticality/availability requirements. Some subsystem roles may be duplicated by multiple subsystems (particularly historical archivers/servers). In a primary/backup subsystem, the backup will typically take over less than one second after a failure of the primary is detected. In peer subsystems, both nodes operate independently.
Because each subsystem runs on its own pair of (inexpensive) PCs, the functionality of the old minicomputer system is distributed and reliability is improved: a hardware failure no longer affects the entire system.
Section Modularity
A PICS may be logically divided into a number of sections, typically based on the physical networking structure. This allows an isolated section to continue operating on whatever data is still available to it until it can rejoin PICS. At one large site, each of the six major control rooms was designated a separate section, with its own data collection subsystem for the plant systems it monitors and controls. This way, if a network issue temporarily separates a control room from the rest of PICS, the operators still have access to the local data they need to perform their jobs (though they can no longer see data from the rest of the plant, and the rest of the plant temporarily loses access to that room's data).
Core Components
PICS is an extremely configurable and extensible system, which makes many of the core components optional. The only absolutely required components are the Task Monitor, Point Database and Real Time Database. The list below includes the most common additional components as well.
- Software Verification - All of the PICS software,
configuration, and other unchanging (or very rarely changing) files are
maintained in a special file called a verification library (VLB).
- Server (vserver) - Validates the local VLB and provides access for clients
- Client (vclient) - Accesses a server to ensure that all files targeted to the specific PICS node are present and up to date, and downloads and updates any that are not.
- Verification "Push" - an optional verification
component that runs continuously and actively updates shared files. This system is not
targeted to specific nodes or subsystems - all files maintained here are
verified and updated on all nodes running the client.
- Server (vpushh) - Monitors a special set of directories that contain the files to be verified. When a file in this directory tree is modified (or a new file is added), all of the clients are immediately notified.
- Client (vpushc) - At startup, all of the files are verified and updated as needed. Thereafter, any time a notification is received, the associated file is downloaded and updated.
- Control - Subsystem/node/task management, control and
monitoring.
- Task Monitoring (taskmon) - The PICS Task Monitor is responsible for starting and monitoring the health of all of the automatic tasks. Other tasks may be started manually or by other means, and any of those that register with the PICS Task Monitor will also be monitored for health (until they inform TaskMon that they are terminating cleanly). The PICS Task Monitor also manages the primary/backup aspects of non-peer subsystems. (The underlying heartbeat pattern is sketched after this list.)
- Watchdog (watchdog) - Monitors the health of the PICS Task Monitor. If the PICS Task Monitor unexpectedly terminates, appears to hang, or otherwise disappears, the watchdog will attempt to reboot the local computer.
- Point Database - contains all "static" (i.e. not scanned or computed)
data about each point in the system. Also contains additional tables used
for things that need centrally managed configuration such as message
translations, color sets, display elements, etc.
- Core (sdb) - manages the actual database files, coordinates changes, creates and maintains the cached database image that is used by all other applications.
- Server (sdserver) - distributes changes and cached database images, accepts change requests and forwards them to the core.
- Client (sdclient) - manages each node's copy of the database cache, alerts applications to changes, accepts change requests and forwards them to a server.
- Real Time Database - contains all "live" (generated by scanning or computing)
data about each point in the system.
- Standard Agent (rtdba) - collects incoming data from local applications, broadcasts data to all PICS nodes, receives PICS data broadcasts and distributes received data to local applications.
- Remote Server (rtserver) - acts as a local client to provide a TCP/IP service for distant remote clients to receive a directed stream of PICS real time data.
- Remote Client Agent (rtclient) - connects to a server to receive a stream of PICS real time data for delivery to local applications.
- Data Collection Management - EVI's 8800 series of data collection systems
require a management/control system to interact with PICS.
- Control (muxctl) - Determines which 8800 (in an A/B pair) is currently primary, proxies data for 8800 pairs that have "gone away," sends commands to 8800s (e.g. reboot, stop scanning, etc.), collects performance data from 8800s, provides data for 8800 status points in the real time database system.
- Database Management (muxsdb) - interfaces with the local point database agent to create and maintain the data tables necessary for each 8800, sends table updates to 8800s as needed.
- System Management - tools and utilities that help a PICS manager to see the
state of the system, point data, performance info, etc.
- System View (sysview) - Provides a detailed look at all of the nodes, subsystems, services and tasks in a running PICS. May also provide performance info about 8800s when executed on a data collection node.
- Cache View (sdbcview) - Provides a record-by-record look at all of the records in the point database.
- Real Time Data Monitor (monitor) - Provides a way to see the real time data stream on a node.
- VLB Creation (vcreate) - Creates a new VLB using an input script that lists all of the files targeted to all of the nodes. Typically, the input script is generated by a series of batch files (or command scripts) because it is much easier to update shared scripts than dozens, hundreds or even thousands of individual file entries in the complete configuration list.
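Both the Task Monitor's health checks and the watchdog's "is TaskMon still alive?" test come down to a heartbeat pattern. The sketch below illustrates only that general pattern, not EVI's implementation: the monitored task records the time of its last completed cycle, and the monitor flags the task once that time is older than a configurable stale limit (here it simply prints the decision where taskmon would restart the task or the watchdog would reboot the node).

    /* Generic heartbeat/health-check sketch (not the actual taskmon/watchdog
     * code): a task records the time of its last completed cycle, and a
     * monitor flags the task when that time is older than a stale limit. */
    #include <stdio.h>
    #include <time.h>

    #define STALE_LIMIT_SECONDS 5   /* assumed limit; real limits are per-task */

    static time_t last_heartbeat;   /* written by the task, read by the monitor */

    static void task_cycle(void)
    {
        /* ... do one unit of the task's real work here ... */
        last_heartbeat = time(NULL);             /* "I am still alive"          */
    }

    static int task_is_healthy(void)
    {
        return (time(NULL) - last_heartbeat) <= STALE_LIMIT_SECONDS;
    }

    int main(void)
    {
        task_cycle();                            /* one healthy cycle           */
        printf("healthy now? %s\n", task_is_healthy() ? "yes" : "no");

        /* simulate the task hanging by back-dating its last heartbeat         */
        last_heartbeat = time(NULL) - (STALE_LIMIT_SECONDS + 1);
        if (!task_is_healthy())
            printf("task stale: a real monitor would restart it (taskmon) or "
                   "reboot the node (watchdog)\n");
        return 0;
    }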
Historical Archivers
PICS has two very different data recorders that provide other applications with access to data from earlier times.
- Plant Data Recorder System (PDRS) - Records the entire
PICS Real Time stream to a series of hourly files. This provides access to ALL data that was received by the
archiver node.
- Archiver (pdrsarch) - Receives and records the PICS real time data stream to a series of hourly files. Each file begins with a special record containing the current value of every point in the system at the time the file was created.
- Retrieval (pdrsrtrv) - Can act as a server, a client, or both. As a server, pdrsrtrv provides a TCP/IP service that clients may use to request and receive historical data. As a client, it may access either local PDR files or a server to collect historical data into various file formats.
- Client DLL (evPDRS.dll) - Applications may use the client DLL to perform their own retrievals for historical reporting, graphing, etc.
- Data Archiver System (DARS) - Records periodic snapshots of all
points in the database. This archiving system was created to provide a relatively quick way to look through large amounts
of historical data to locate likely time ranges for events; once an event is located in time, PDRS can be used
to extract all data around the event for analysis.
- Archiver (darsarch) - Records snapshots of current values from the PICS real time data stream at customer-defined intervals to as many as eight different files. For example, a customer may have one-second, ten-second, one-minute, one-hour and one-day files. The files are circular and fixed in size, so the best resolution covers only a short time prior to "now," while the worst resolution might go back for years. (The circular-file idea is sketched after this list.)
- Server (darssrvr) - Provides a TCP/IP service that clients may use to request data about and from the DARS history files.
- Client (darsrtrv) - Accesses the server to collect historical data into several different file formats. Some applications have their own internal clients as well, for example Recall Display can use DARS to backfill a real time data graph with historical data, providing some reference for the initial live values plotted.
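The trade-off darsarch makes (fixed file size, with the newest snapshot overwriting the oldest) is a circular file. The sketch below shows only that general idea; the slot count, record layout and file name are made up and have nothing to do with the real DARS formats.

    /* Minimal circular-snapshot-file sketch (assumed formats, not the real
     * DARS file layout): a fixed number of fixed-size slots, with each new
     * snapshot overwriting the oldest one once the file is full. */
    #include <stdio.h>
    #include <time.h>

    #define NUM_SLOTS   8            /* assumed: real files hold far more      */
    #define NUM_POINTS  4            /* assumed tiny snapshot for illustration */

    typedef struct {
        time_t stamp;                /* when this snapshot was taken           */
        double value[NUM_POINTS];    /* current value of every point           */
    } snapshot;

    static void write_snapshot(FILE *f, unsigned long seq, const snapshot *s)
    {
        long slot = (long)(seq % NUM_SLOTS);               /* wrap around      */
        fseek(f, slot * (long)sizeof(snapshot), SEEK_SET);
        fwrite(s, sizeof(snapshot), 1, f);
        fflush(f);
    }

    int main(void)
    {
        FILE *f = fopen("dars_example.bin", "w+b");
        if (!f) return 1;

        /* record 20 snapshots into 8 slots: the first 12 are overwritten,    */
        /* so the file always holds only the most recent NUM_SLOTS snapshots  */
        for (unsigned long seq = 0; seq < 20; seq++) {
            snapshot s = { time(NULL),
                           { 1.0 * seq, 2.0 * seq, 3.0 * seq, 4.0 * seq } };
            write_snapshot(f, seq, &s);
        }

        fclose(f);
        return 0;
    }

Because the oldest slot is always the next one overwritten, the fastest-interval file covers only the most recent window of time while a one-day file built the same way can reach back for years, which is exactly the resolution trade-off described above.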
Display Components
Data displays for operations and engineering analysis are a fundamental part of a plant computer system. PICS has several standard display programs and many more that were custom-designed by customers to meet their specific needs.
- Recall Display (redisp) - can display PICS data in several formats: alphanumeric tables, strip graphs, and X-Y graphs. The graphs may use live data and/or historical data from either historical archiving system. Because PICS carries both the engineering units and the raw values, Recall Display is also a very useful maintenance tool, allowing hardware maintenance engineers to look at raw sensor data when troubleshooting.
- Operator Programmable Alarms (opal) - provides a soft, customizable annunciator panel. The number of rows and columns is adjustable, along with which points are assigned to each annunciator (and the alarm levels that trigger different colors in the annunciator). OPAL is also capable of replacing annunciators with graphical meter displays to visually show the current value.
- Alarm Log (alarmlog) - provides a display that lists all of the current alarms in the system. The alarm list may be presented in a simple format, as would have been written to a printer in the past, or in a sorted format that shows the most recently received alarms at the top of the list.
- Alarm Sort (alarmsort) - a different type of alarm logger that groups alarms into different categories, for example, critical alarms, high alarms, low alarms, temperature alarms, etc. Within each category, the most recent alarms are at the top of the list. The most recent alarm is also displayed at the top of the program window, outside of any of the category lists.
- DataViews® Interface (picsviews) - uses the DataViews package to render complex active drawings of plant systems that are driven by PICS real time data. These drawings may be zoomed, panned and they may also be made "clickable" to switch between views. Views may even contain triggers that automatically zoom to a specific subsection, load a different view, and much more.
Feature Components
These programs add specialized capabilities to PICS, beyond the basic functionality of a typical plant computer system. All of these programs are optional.
- Web Server (picsrpg) - Originally created to provide some basic, standardized reports in response to simple queries, PicsRPG has been expanded to include a template language that allows creation of complex pages containing live data from PICS. When the template language is combined with a browser scripting language (e.g. JavaScript) very complex pages and graphics may be created.
- Email Agent (pema) - Sends email to an address (or a user-definable mailing list) when a point performs a configurable transition. For example, if a pressure exceeds a limit, then a mailing list is notified of the event.
- DDE Interface (rtdbdde) - Provides access to PICS real time data for programs capable of using Windows Dynamic Data Exchange. This is a somewhat old system, but Excel still supports it and may be used to create all sorts of reports, charts and graphs from live PICS data.
- Remote Access Control - Designed to provide password-protected
remote access to PICS services and data that are exposed through a bridge subsystem.
- Server (acb) - Provides user credential validation and PICS configuration info to the client.
- Client (acc) - Provides a user interface for PICS, runs the local taskmon and provides PICS configuration to taskmon so that it thinks the local node and the bridge subsystem are an entire PICS.
- Extensible MMI (picsmmi) - Created to provide a framework
for future user interface functions, PicsMMI uses custom DLLs (called PMA files)
to provide all of its useful functionality; the generic plug-in loading pattern behind this is sketched after this list. A few of the PMAs created are:
- Operator Control (opcon) - Provides an easy-to-use interface for basic point control operations like on/off scan, manual substitution, etc.
- Point Editor (pedit) - Allows changing a point's record in the database.
- Point Display/Trend (point) - Provided as an example of how to create a PMA, this extension offers two features. The first displays a complete translation of all of the data in a point's real time data record. The second displays a trend of the point's current value as a series of text output lines, one per sample (with the samples taken at a specific time rate or any time the value changes).
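The PMA interface itself is EVI's and is not documented here. The fragment below shows only the generic Windows plug-in pattern that any DLL-based extension mechanism relies on (LoadLibrary/GetProcAddress), with a made-up DLL name and export standing in for whatever PicsMMI actually uses.

    /* Generic Windows plug-in loading pattern (illustrative only; the real
     * PMA entry points are not shown here).  The host loads a DLL by name
     * and resolves a known export; "pma_init" is a made-up name. */
    #include <windows.h>
    #include <stdio.h>

    typedef int (*pma_init_fn)(void);   /* assumed signature for illustration */

    int main(void)
    {
        HMODULE pma = LoadLibraryA("example_pma.dll");  /* hypothetical DLL   */
        if (pma == NULL) {
            printf("plug-in not found\n");
            return 1;
        }

        pma_init_fn init = (pma_init_fn)GetProcAddress(pma, "pma_init");
        if (init != NULL)
            printf("plug-in initialized, returned %d\n", init());

        FreeLibrary(pma);
        return 0;
    }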
Custom Components
Every site has specific needs and desires that differ from those of all other customers. EVI has been able to extend PICS by creating custom components as needed by our customers. In addition, some of the PICS applications are designed with extensible interfaces, allowing customers to create their own new functionality by creating new DLLs that those programs can use. When PICS is replacing an existing system, we create custom components to replace and replicate (and sometimes improve and enhance) components from the system being replaced.
- Safety Parameter Display System - Created to replace and
enhance an existing system.
- Display (spdsdisp) - Not only did this program need to look like the program it was replacing, it also needed to interface with control buttons on the control panel.
- Compute (spdscomp) - Special program that derived all of the calculated values necessary for the display.
- Archiver (spdsarch) - Special archiver, designed to make finding the data leading up to plant trip events (and later, other important events) quick and easy.
- Computed/Derived Data Points
- Balance of Plant (bopcomp) - implemented customer-provided algorithms to calculate critical values for managing the reactor and other plant systems.
- Generalized Compute (picscomp) - provided many standard computations (like polynomial calcs, thermocouple conversions, etc.)
- Redundant Instrument Monitor (rim) - implemented customer-provided algorithms to continuously monitor redundant instrument sets for missing/malfunctioning instruments and sensors.
- Graphing/Display
- Gradient Plot (gplot) - Replaced a very old custom program (written in BASIC to run on DOS) with an enhanced version that is driven by data from PICS. As with many programs of this type, one major goal was to look as much like the original as possible. This program matched the look and feel of the original so well that it had been running for more than a month before anyone noticed it was not the old program!
- PGPLOT Replacement (pgplot) - Replaced a graphing program that ran on the old MODCOMP® and wrote graphs to a Tektronix® graphics terminal. Once again, matching the look of the old graphs was a critical requirement that was achieved quite well.
- Utilities
- File Archive and Cleanup Task (fact) - Archives old files in a directory to an FTP server and deletes any files older than a configurable age. BOTH functions are optional, so this program may be configured to clean up after others that write many files and/or to archive copies of the files to an FTP server. This is useful for jobs like moving old PDR files to long term storage and deleting the oldest PDR files to ensure that the active PDR drive doesn't overflow.
- Automatic File Transfer (shifty) - One of several unique programs created to automatically transfer reports and data files generated by certain programs to an FTP server. The FTP server could be another machine at the site or even an off-site backup facility.
- MODCOMP® Console Replacement (mcr2) - Created to provide a nearly exact duplicate of the command line interface that the plant operators were familiar with in the system being replaced. As PICS was phased in and operators were trained to use the Windows platform itself, they requested more and more graphical functions to replace/enhance the more cumbersome command lines and MCR2 was updated to include the new functionality.
Hardware Requirements
The bulk of PICS will run on just about any hardware that will run Microsoft's Windows operating system. The current version of PICS is running on Windows 2000, Windows XP, Windows 7, and Windows Embedded Standard 7. The oldest active PICS is running on Windows NT4 Service Pack 5 and has been running 24x7 with virtually zero unplanned down time since the late 1990s – but the latest version is even more reliable!
Most recently, when we updated a site to use Windows Embedded Standard 7 (32 bit) on all of the PICS core and display machines, the new machines were mostly single-core 1.6-1.8 GHz x86 processors with 2 or 4 GB of RAM. After the machines were converted, they were monitored for performance, and we discovered that even with almost 15,000 points (over 6,000 changing per second), total CPU utilization was normally well under 5%.
Most of the 8800 systems (front-end data collection) used with PICS run on custom-made systems based on single board computers running a custom-built Linux kernel. There are still some 8800 systems running DOS, as well as a few data collection programs that run directly on a PICS node under Windows (for example, EVI's generalized ModBus point interface for PICS). EVI is often asked to create a custom 8800 system for a site so that the existing sensors and sensor interfaces may be kept, saving money (and allowing for an easier comparison/testing/validation between the old and new systems, since the same sensors are used by both). In some cases, the existing data collection devices were also replaced by EVI using custom-built systems that mimic all necessary functionality of the old systems at both the software and hardware levels.
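The generalized ModBus interface mentioned above is EVI's own software; the sketch below shows only the standard Modbus/TCP "read holding registers" request that any such interface ultimately issues, to illustrate the kind of polling a data collection front end performs. The unit id, register address and count are made up.

    /* Standard Modbus/TCP "read holding registers" request (function 0x03).
     * This only builds and prints the 12-byte frame; it is an illustration of
     * the protocol, not EVI's ModBus interface.  Addresses/counts are made up. */
    #include <stdio.h>
    #include <stdint.h>

    static void put_u16(uint8_t *p, uint16_t v)     /* big-endian, per Modbus */
    {
        p[0] = (uint8_t)(v >> 8);
        p[1] = (uint8_t)(v & 0xFF);
    }

    int main(void)
    {
        uint8_t frame[12];

        put_u16(&frame[0], 1);      /* transaction id (any value, echoed back) */
        put_u16(&frame[2], 0);      /* protocol id: always 0 for Modbus        */
        put_u16(&frame[4], 6);      /* length of the remaining bytes           */
        frame[6] = 1;               /* unit (slave) id - made up               */
        frame[7] = 0x03;            /* function: read holding registers        */
        put_u16(&frame[8], 100);    /* starting register address - made up     */
        put_u16(&frame[10], 8);     /* number of registers to read - made up   */

        for (int i = 0; i < 12; i++)
            printf("%02X ", frame[i]);
        printf("\n");
        return 0;
    }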
Customers and Testimonials
- Progress Energy, Crystal River, Unit 3 Nuclear Power Plant
- While PICS was being created, InfoWorld rated it the #1 Client/Server Project of 1995, and we saved a copy of the article for you at http://www.electronicvisions.com/infoworld_article_reprint.html.
- For the 10 years PICS had been running at CR3, the plant had one of the best (if not the best) plant computer availability records of any Progress Energy power plant. And that was running the first version of PICS – the latest version is far more robust and reliable!
- First Energy, Davis-Besse Nuclear Power Plant
- Davis-Besse runs the PICS core to collect data, coupled with some custom software (written by both EVI and Davis-Besse) to send the data to an Intellution iFIX32-based display system.
- Central Contra-Costa Waste Treatment Plant
- The PICS core was used along with a custom data collection interface to create and maintain data archives for later analysis. At this site, PICS received both point and real time data from a separate data collection system using an interface designed to work with the quirks of the other data system.
- United States Enrichment Corporation, Paducah Gaseous Diffusion Plant
- The largest and most sophisticated PICS installation, made up of six different sections and over 100 PCs.
- Running the latest version of the PICS software.
- Data being collected for almost 15,000 points from seven completely different hardware systems (avco, cpi, dm-200, megawatt, line recorder, serial scale, and digital scale).
- Over 20 custom applications, many added as additional systems were incorporated into PICS beyond those that were part of the original plant computer system that PICS replaced.
Evolution...
The PICS core has been repurposed by creating new components to bundle with it. For example, EVI's Radiation Monitor Computer System (RMCS) uses the PICS core with new applications for display, computation and data collection while the core provides system management, the point database, and real time data distribution.
If you have a project that needs to collect, distribute and display data, EVI can work with you to create the necessary modules to fit the PICS core into your system. Contact Harry "Butch" Young at hyoung@e-visions.com or by phone at (321) 632-7530.