
Converting Equipment from PC-Based to PLC-Based Control

New PLC panel, Image by Alicia Lomas

My company was provided an OEM label printer and applicator machine that was being controlled by five Arduino PC boards.

The OEM had been asked to provide a PLC control system; however, they insisted that they needed to use this controls architecture for the application, mainly due to the interface with peripheral equipment and the speed of pneumatic action and I/O processing.

Remote panel before upgrade, Image by Alicia Lomas

After many failed attempts to FAT/SAT the equipment and having the software developer onsite for weeks at a time, we simply could not see this system getting to where it needed to be.

The issues presented themselves in the following ways: software-based E-Stop signals, unrecoverable faults, new bugs appearing at each attempt at a performance run, and overall inability to manage the individual machine states.

It was apparent that we would be stuck with a black box that our internal controls staff wouldn’t be able to support, and that there would be unexpected downtime related to unrecoverable faults requiring a power cycle of the individual PC boards.

The equipment gets a new brain

After getting buy-in from upper management, I set out to execute a project to replace the PC boards with remote I/O panels and a PLC control panel.

It was important to do this on an expedited schedule and to be as cost effective as possible; after all, we had already spent the capital on the machine.

Although it would have been nice to hire one systems integration firm to provide a turnkey solution, I wasn’t afforded that kind of time or budget.

I ended up contracting a panel design/build shop, a systems integrator and a local electrical company.

All of the existing physical I/O was reused, so it was a matter of getting remote I/O panels to replace the individual PC controllers.

We utilized Allen-Bradley components, including Guardmaster safety relays, a CompactLogix PLC, and multiple remote I/O racks.

New PLC panel, Image by Alicia Lomas

The challenges

Not only was it difficult to find a controls electrical firm in Silicon Valley, but the installation was also very time consuming and complicated.

The wires were oh so tiny; nothing was larger than 20 gauge.

Real estate in the wireways and panels was tight, therefore the electricians ended up having to do a lot of soldering in locations that caused back pain all around.

In the 11th hour I realized that I had taken for granted that most sensors I’ve ever dealt with are PNP.

Everything on this machine was NPN, which required a last-minute change of most of the input cards to sourcing types.

There was Pulse Width Modulation Vision System Lighting that needed a controller; I went through a few different lighting control options until we found just the right output voltage and amperage for the installed LED lights.

There were some LED casualties in the process.

The Result

I really did luck out with the Electrical Contractors I found and the fact that they took pride in their work.

Everything looked clean and professional and the I/O checkout of over 200 points only required us to swap a few wires and run one new cable.

Remote panel after upgrade, Image by Alicia Lomas

We were able to convert the HMI from DAQFactory to Ignition and gain the benefit of being able to historize all tags and use trending and plant replay for commissioning and troubleshooting.

We already had Ignition installed for MES and SCADA, so we were able to do this without purchasing additional software.

We simply utilized the HMI hardware that was already there and pointed it to the gateway to launch the application.

I was able to get great support from my Panel Builders and local Rockwell Distributor to get new cards and any other needed items very quickly so as not to impact schedule.

We made the machine safe by introducing a true safety circuit that included door interlocks and multiple E-stops.

The aggressive eight-week schedule was met, and the machine exceeded performance testing requirements.

Conclusion

With this project, not only do we have a rugged, robust, well-performing machine, but we also have a standardized control system that can be easily maintained and enhanced by the controls staff our company already has.

There is a time and a place for PC based control systems; it just was not the right fit for our application, machine, environment and overall automation strategy for our manufacturing systems.

Written by Alicia Lomas
Project Manager, Automation Engineer, and Freelance Blogger

Have a question? Join our community of pros to take part in the discussion! You'll also find all of our automation courses at TheAutomationSchool.com.

Sponsor and Advertise: Get your product or service in front of our 75K followers while also supporting independent automation journalism by sponsoring or advertising with us! Learn more in our Media Guide here, or contact us using this form.


Insider News for December


Note: Insider News articles & videos cover behind the scenes topics at The Automation Blog, Podcast & Show. Starting in 2021 they’re now posted at http://TheAutomationBlog.com/join

Let me start by wishing you all a very Happy December, and thank you all for your continued support and patronage!

December’s Content Schedule:

This time of year always seems to be a challenge to schedule new blogs and videos since so many other things are going on during the holiday season.

That said, thanks to our awesome freelance writers, this week we were able to bring you a couple of great articles about Project Management and Control System Health. And on next week’s schedule is an article on Asset Centre and another on Migration from SoftLogix to ControlLogix.

I also had a chance to sit down again with Jeff Brown at Mitsubishi and go over their line of HMIs, after which he was kind enough to give me a tutorial on how to program Mitsubishi PLCs.

So the next edition of The Automation Podcast will feature Jeff’s presentation, after which I hope to follow-up with a new episode of The Automation Show going over how to write, download, and test a program for a Mitsubishi PLC for the first time!

Also scheduled for The Automation Show this month is a followup to my last DH-485 episode where I will show how to read data out of several different devices over DH-485 and into a ControlLogix.

Note: I plan to continue to release episodes of The Automation Show here for Patrons to view days earlier than the public.

PC Upgrade

In addition to all that, this Black Friday I finally pulled the trigger and ordered all the parts I’d need to replace my studio PC.

For Insights In Automation, I mostly use two PCs. In the Studio (in my garage) I use a Windows 10 Home i7-6700K Desktop PC for all my industrial automation software which runs inside of VMWare Workstation Pro 15.5.

Then in my office (the small bedroom in our house,) I use a Windows 7 Home Premium Asus G75 Laptop which has an i7-3720QM.

The problem is the Laptop is vintage 2012, and while I’ve definitely got my money’s worth, it’s so old that it’s starting to give me a lot of issues (and for some reason it doesn’t like Windows 10.)

I’ve also spent a lot of time this year waiting for my 2016 i7-6700k to finish rendering episodes of The Automation Show as well as lessons for my courses at The Automation School.

So after much thought and internal debate, I decided to break out the company credit card and order all the parts needed to build a new Windows 10 Pro i9-9900 based Desktop PC for use in creating my videos as well as rendering those videos.

That will free up my 2016 desktop, which I can use in my office to replace my old 2012 Asus laptop.

If time allows, I hope to film the process of building of my new PC to share with you as a “Patron only” insider video next month 😉

Store (and free downloads) Update

As a follow-up to adding “all” of the past episodes of The Automation Minute to the store here on TheAutomationBlog.com, I’m now working on adding all of the past episodes of The Automation Show as well.

To that end, I’ve already processed up to episode 26 and uploaded them to the Insights’ S3 account, so now I just need to schedule some time to create all the “store products” here on the site.

Once they are all added, Platinum Patrons will have access to download all of them totally free of charge!

New “Swag” Planned

And the final item I wanted to share this month is my plans to add The Automation Blog and Show swag to the store.

Specifically I’m thinking T Shirts, Mouse Pads, and Coffee Cups sporting our site’s logos.

While I was hoping to have that done this month, the hectic holiday schedule my wife has planned for us will probably push it out to January.

And that’s this month’s Insider News update!

If we don’t get to talk before the holidays, let me wish you all a very Happy Holiday Season, and a safe and prosperous New Year!

Until next time, Peace ✌️ 

If you enjoy this episode please give it a Like, and consider Sharing as this is the best way for us to find new guests to come on the show.

Shawn M Tierney
Technology Enthusiast & Content Creator

Eliminate commercials and gain access to my weekly full length hands-on, news, and Q&A sessions by becoming a member at The Automation Blog or on YouTube. You'll also find all of my affordable PLC, HMI, and SCADA courses at TheAutomationSchool.com.


Mitsubishi HMI Overview: GOT2000 (P49)

In this week’s episode of The Automation Podcast, Jeff Brown from Mitsubishi provides us with an overview of the GOT2000 line of HMIs:

For more information, check out the “Show Notes” located below the video.

Watch the Podcast:



The Automation Podcast is also available on most Video and Podcasting platforms, and direct links to each can be found here.


Listen to the Podcast:


The Automation Podcast, Episode 49 Show Notes:

Special thanks to Jeff Brown at Mitsubishi for taking the time to review the Mitsubishi line of HMIs with us!


Vendors: Would you like your product featured on the Podcast, Show or Blog? If you would, please contact me at: https://theautomationblog.com/contact

Until next time, Peace ✌️ 

If you enjoy this episode please give it a Like, and consider Sharing as this is the best way for us to find new guests to come on the show.

Shawn M Tierney
Technology Enthusiast & Content Creator

Eliminate commercials and gain access to my weekly full length hands-on, news, and Q&A sessions by becoming a member at The Automation Blog or on YouTube. You'll also find all of my affordable PLC, HMI, and SCADA courses at TheAutomationSchool.com.


The Realities of Project Management: The Project Czar

Image by FelixMittermeier from Pixabay

In 1975, computer architect Fred Brooks published a series of essays on software engineering and project management called “The Mythical Man-Month”.  These cautionary tales and inevitable realities not only stand the test of time, but also can crossover to many applications.

In this third article I will talk about his essay “The Surgical Team”. The basic theme of this essay is that a large project should be tackled by a team, but that the “team be organized like a surgical team rather than a hog-butchering team”.

Image by FelixMittermeier from Pixabay

What this analogy means is that on a surgical team, it is the surgeon who is in charge of the group. While the surgeon may delegate whatever they wish, they always have the final say over what goes on.

The recent film Ford v. Ferrari has a good example of this in practice. In the movie (based upon a true story), Carroll Shelby was tasked with creating a viable race team for Ford. The goal was to compete in and win the “24 Hours of Le Mans” race.

A former racer himself, Shelby was supposed to be responsible for all aspects of creating the team, from design to staffing to choosing the drivers. Shelby initially didn’t have success due to some corporate micro-management that was sabotaging his efforts. It was only after he was able to take full control that he ultimately was able to achieve what he was tasked to do.

The same dynamic can be true for any large programming project. There can be many layers of management and many different influences that pull the projects in different directions. This can have many negative effects – the project can be delayed or compromised, and a lot of time can be lost.

The other reality is that not all programmers have equal talent. In the computer science world, there is a type of programmer that is referred to as “full-stack”. This simply means a programmer that can do it all themselves, from front end to back end. Obviously, it would be great to hire only “full-stack” programmers, but the reality is that they don’t exist in abundance.

If you are fortunate enough to have a programmer that fits this description, consider putting them directly in charge of a large project – make them the Project Czar.

Compensate them appropriately and allow them to choose their own support staff as needed to handle the minutiae of the project. There will likely be another very competent programmer (who will one day aspire to be a Project Czar themselves) who will be the right-hand person and be leaned upon very heavily. Others can be selected for their acumen with documentation, user interfaces or whatever is needed to allow the project to be completed in a timely manner.

This approach overcomes two main obstacles with any programming project. The first was aforementioned – that some programmers are very, very good while some are, well, not so much. The second regards communication overhead, a topic covered in the first blog in this series. By having the main programmer in charge of the project, this can mitigate a lot of those communication issues (although they will always exist in some measure).

Consider this approach for your projects. While the generalist manager can be effective in some types of projects, having your best practitioner in charge may yield the best results. Specialization is in vogue across many fields and industries and this should not be an exception with managing programming projects.

Written by Carlo Zaskorski
Controls Engineer, Product Manager, and Freelance Blogger
Edited by Shawn Tierney

Have a question? Join our community of pros to take part in the discussion! You'll also find all of our automation courses at TheAutomationSchool.com.

Sponsor and Advertise: Get your product or service in front of our 75K followers while also supporting independent automation journalism by sponsoring or advertising with us! Learn more in our Media Guide here, or contact us using this form.


Keeping Check On Control System Health

Image by Brandon Cooper


As you and I travel the roadways each day, we cross paths with many other people traveling to their destinations in automobiles of all kinds.

We notice some vehicles are well taken care of, while others are not.

While some owners choose to take care of their vehicles with extra effort, and drive them with few issues for many years, others who neglect their vehicles often end up with significant failures before their investment is paid for.

Image by Brandon Cooper

The same is true for control systems: we all have a choice of whether to monitor the “service engine” light or ignore it.

Whether we choose to be proactive or reactive can be the difference in how well we sleep at night (so I believe being proactive is the better way to go.)

There are both hardware and software tasks that can be done to monitor what is going on with our control systems, both to prevent issues and to give us the ability to rapidly troubleshoot the issues that do arise.

Monitoring Temperature

We all know that electronics and high temperatures do not mix well.

If an A/C unit fails, how quickly it can be repaired may be the difference between your control system processor lasting ten years or twenty.

Even if the temperature does not get high enough to shut a processor down, it could still be causing lasting damage. Repeated overheating may reduce the life of components even more.

Monitoring of rack room or controller temperatures is highly important for control systems. Having automated emails sent to operations and/or control engineers based on high temperature will go a long way toward resolving these issues as soon as possible.
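As an illustration only, here is a minimal Python sketch of that kind of high-temperature email alert; the temperature source, threshold, recipients, and mail relay are all hypothetical placeholders you would swap for whatever your site actually uses.

    import smtplib
    from email.message import EmailMessage

    HIGH_TEMP_F = 85.0                              # hypothetical alarm threshold
    RECIPIENTS = ["controls-team@example.com"]      # hypothetical distribution list

    def read_rack_room_temp() -> float:
        # Placeholder: in a real system this value would come from a monitored
        # sensor, e.g. a PLC tag, a BMS point, or an SNMP-enabled thermostat.
        return 92.3

    def send_alert(temp_f: float) -> None:
        msg = EmailMessage()
        msg["Subject"] = f"Rack room high temperature: {temp_f:.1f} F"
        msg["From"] = "alarms@example.com"
        msg["To"] = ", ".join(RECIPIENTS)
        msg.set_content("Rack room temperature is above the alarm limit - check the A/C.")
        with smtplib.SMTP("mail.example.com") as smtp:  # hypothetical mail relay
            smtp.send_message(msg)

    if __name__ == "__main__":
        temp = read_rack_room_temp()
        if temp > HIGH_TEMP_F:
            send_alert(temp)

Run on a schedule (from cron or Task Scheduler, for example), a script like this is one of the simplest ways to get that “service engine” light in front of the right people.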

Clean Air

When was the last time you cleaned the incoming or outgoing air filters on the cooling fans of your Control System cabinets?

Chances are their replacement is overdue, but they need to be replaced at proper intervals in order to do their job of allowing fresh, filtered air into the cabinet.

Air purification and pressurization systems are important to have in rack rooms as well.

Monitoring and alarming when your rack room loses pressure can also be an important step in preventive maintenance of your control system.

In many industries, if an air purification or pressurization system fails, the corrosion rate for control system equipment increases tenfold.

Not knowing the system is down for several days could cost your facility significant repairs.

Software Monitoring

There are many attributes to monitor from a software point of view, and the more critical ones should also trigger an email alert sent directly to the control systems team for resolution.

This can be the difference in troubleshooting time measured in seconds instead of minutes or hours.

Here are a few points I’d recommend starting with:

  • Processor Redundancy Failure Alarm
  • I/O Rack or Module Faulted
  • Device Level Ring Fault
  • Operator Station /HMI Failure
  • Temperature or Pressure Failure in Rack Rooms
  • Network Component Failures

While these might sound like simple tasks, your facility might have dozens of important rack rooms, each with possibly hundreds of intelligent control system components to monitor.

Implementation of each of these parameters on every device will take time, however the end-result is more sleep at night.

And at the end of the day the result will be more uptime at your facility, with lower production costs.

Written by Brandon Cooper
Senior Controls Engineer and Freelance Writer

Have a question? Join our community of pros to take part in the discussion! You'll also find all of our automation courses at TheAutomationSchool.com.

Sponsor and Advertise: Get your product or service in front of our 75K followers while also supporting independent automation journalism by sponsoring or advertising with us! Learn more in our Media Guide here, or contact us using this form.



Happy Thanksgiving From The Automation Blog!

Wishing all our readers and viewers a very Happy Thanksgiving!

We hope you have a day filled with happiness, and we’re very thankful that you make TheAutomationBlog.com one of the sites you choose to visit each week!

Sincerely,

Shawn Tierney, TheAutomationBlog.com

Becoming Self Sufficient: Taking Ownership of your Control Systems

Rychiger Crane - Image by Alicia Lomas

There are many challenges to having the right technical people on staff to be able to support complex control systems.

And there are often times where these technical people (automation engineers, electronic techs, maintenance electricians) get handed something and are expected to reverse engineer it.

This is not an easy task, as you’re often dealing with multiple communication protocols, different types of IO, complex safety circuits, and machine builder code (like PackML) that has a steep learning curve.

Guardmaster image by Alicia Lomas

Why shouldn’t companies rely exclusively on vendors for support?

The bottom line is that downtime is expensive.  This is especially true in certain industries and processes.

For example, certain batch processes have steps which must happen in a timely manner, and dairies must process milk promptly to maintain freshness or it must be dumped.

When a company is reliant on outside support from vendors, there is only so much that can be done over the phone.

Often the issues require a vendor to travel to the site, and vendors only have so many resources.

Another point that has impacted me in food manufacturing is that most of the OEM machine builders are overseas.

While they have technicians in the states, they usually are not as familiar with the machine since they were not part of the machine building process.

Also, not all vendors have 24/7 support, and some European vendors can only support when they have personnel in the office, which could have you waiting until early in the morning to get support.

How does a company set their technical resources up for success?

There are strategies to set your technical resources up to be able to support the facility without having to add substantial head count.

The key is partnership with the vendors, setting expectations and standards, and having the technical personnel get involved early.

When engineering is seeking a vendor and negotiating the contract, it is crucial to make sure they know that everything should be open source and the source code will be handed over to the customer after the SAT (Site Acceptance Testing).

Proprietary systems and black boxes will always result in the end user becoming dependent on the vendor for support.

This can be alleviated if the end user creates an automation and electrical standard prior to negotiations, which is a proven strategy to help you ensure your team is able to maintain the equipment your company plans on purchasing.

There are always going to be exceptions to your standard that will have to be discussed.  And that is okay; the last thing you want to do is force the OEM to use a technology that they’re not familiar with; after all, they’re the experts on their equipment.

The more you can standardize, however, the easier it will be for your technical personnel to support the equipment without additional time-consuming training.

As much as possible you’ll also want to include your maintenance and automation techs in design, functional specification reviews, Pre-FAT trips, FAT and SAT.

How does this impact the relationship between the customer and the vendor?

Most OEMs enjoy the engagement of the end user’s technical staff up front.

They know this will likely benefit them by reducing the number of service trips, and by providing the opportunity to discuss issues with someone who speaks their language.

I recently sent a saved copy of the PLC program when we were experiencing a PackML state stuck in a transition with no alarm; my controls engineer at the OEM was ecstatic and was able to quickly send over a fix!

Set up controls and electrical training when the OEM is onsite for commissioning to further ensure success and knowledge transfer.

Bonus benefits

The underrated benefit is that the in-house team can cost effectively make tweaks and add features.

For example, if production needs an additional metric or added functionality, these can be implemented by onsite personnel.

This will be a quicker turnaround and will save the facility money.  However, make sure all changes are sent to the OEM, so they have the most updated code.

Conclusion

With any control system, after the SAT the equipment belongs to the plant, with the vendor there to offer support when needed.

The ideal setup is to have a true partnership between the technical staff at the plant and the controls programmers and service techs at the vendor.  This will be a win-win for both sides.

Written by Alicia Lomas
Project Manager, Automation Engineer, and Freelance Blogger

Have a question? Join our community of pros to take part in the discussion! You'll also find all of our automation courses at TheAutomationSchool.com.

Sponsor and Advertise: Get your product or service in front of our 75K followers while also supporting independent automation journalism by sponsoring or advertising with us! Learn more in our Media Guide here, or contact us using this form.


The Realities of Project Management: The Second Effort

Image by Gerd Altmann via Pixabay

In 1975, computer architect Fred Brooks published a series of essays on software engineering and project management called “The Mythical Man-Month.”

These cautionary tales and inevitable realities not only stand the test of time, but also can crossover to many applications.

In this second article I will talk about his essay “The Second-System Effect”. The basic theme of this essay is that while the first attempt at most programs or systems will be small, clean, and perhaps under-powered, the second attempt will swing far the other way.

Image by Gerd Altmann via Pixabay

As an engineer or programmer works on a first system, the ideas and logic that come to them are derived from the specification of the task or project.

The biggest task is to create a complete system and cover all of the functionality that is required.

While it’s beneficial for the design to be clear and easy to follow, that doesn’t always mean it isn’t complex or full-featured.

In the end, what’s most important is that the final product is functional and does everything it’s supposed to do.

As the project matures, however, all the time that is spent working on it leads to a lot of thought about how it could be improved.

Customers, sales people, executives, and others with influence over the project all offer their opinions about what they think needs to be added.

When the time comes to create a successor to the original project, a new specification is required. It is at this point that all of these ideas and suggestions will resurface.

This is also where a lot of the elegance in the first system goes away.

New features and complexity are added, which in turn can make the logic cumbersome. This phenomenon is often referred to as “feature creep”.

As the existing program base is modified and scrutinized, refinements may be made to areas that don’t need them, compromising the integrity of time-tested core segments of the program.

The user interface may end up becoming cluttered and unattractive as all of the new features need to be represented, causing it to be more difficult to use.

With the first system, 80% of the features may be commonly used, whereas with the second system, that number may drop to as low as 20%.

So while the second “improved” system may be able to do a lot more, all the effort that went into adding many new (but seldom used) features often comes at a high price.

As a result, the reaction and feedback to the second system may be less than stellar.

Take Windows 7 and Windows 8 as examples of a first and second system. Windows 7 was generally received with good reviews. Windows 8, however, was almost universally panned.

The changes were pretty drastic, and a lot of complexity was added as Microsoft attempted to re-brand programs as “apps”.

Over time this second system will mature. Updates and patches will be applied to reverse course and reach a stable and acceptable equilibrium.

An example of this is the transformation of Windows 8 into Windows 10. Now Windows 10 is considered a successful operating system. Updates moving forward beyond the second system often continue to trend in the right direction.

The only thing to watch here is that you don’t create “bloatware” – keeping all of the legacy features intact just in case someone wants to use them. Sometimes it is best to cut obsolescence out.

In summary, when starting a second effort, be mindful of doing too much. Keep in mind why the first system was successful and consider using experienced programmers who have worked through the situation before. Don’t think it can’t happen to you!

Written by Carlo Zaskorski
Controls Design Engineer and Freelance Blogger

Have a question? Join our community of pros to take part in the discussion! You'll also find all of our automation courses at TheAutomationSchool.com.

Sponsor and Advertise: Get your product or service in front of our 75K followers while also supporting independent automation journalism by sponsoring or advertising with us! Learn more in our Media Guide here, or contact us using this form.


Evaluating your Control System Failure to Recovery Methods

Image by: Brandon Cooper

If your control system failure experiences are anything like mine, most of them haven’t occurred on weekday mornings when it’s convenient for you and your team to evaluate how good your recovery procedures are.

Instead, they’re more likely to occur at inconvenient times, whether it be in the middle of the night or over the weekend.

Image by: Brandon Cooper

We often go into such events hoping for a smooth recovery that won’t take any longer than it should.

And we often leave these events with a clear understanding of how a little pro-activity and preparation can be the difference between a short outage and losing an entire shift to downtime.

That’s why it’s important to evaluate our system backups and procedures: it’s a key element in having a strong and efficient control systems group that can maintain facility operations with minimal downtime.

The simple fact is our operational teams depend on us to be able to resolve these issues as quickly and efficiently as possible.

What kind of system equipment should I maintain backups for?

In the control system world we have a little bit of everything.

This can include Programmable Controllers, Distributed Control Systems, HMI Terminals, PC Workstations and Servers (which can be hardware or virtual systems,) as well as Network Equipment including Switches, Routers and Firewalls.

But no matter what the mix of equipment is, the important question is, “do you have a recoverable backup for every intelligent device, and is every device inside your OT network accounted for?”

Manual or automatic backups

While implementing a schedule to manually back up all your individual programmable devices can be the easiest solution to set up initially, there are also automatic backup infrastructures to consider.

Many automation companies have products that automatically back up their DCS, PLC, HMI, and SCADA systems. Depending on the size of your facility and system networks, these automated backup systems can be a worthwhile investment.

And when it comes to individual servers, workstations, and virtual machines, there are some great systems with management servers that do automatic scheduling of backups.

These automated backup systems can save your team lots of time over the long run, time which can be spent on tasks that improve operational performance.
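As a simple illustration of the “roll your own” end of that spectrum, the following Python sketch zips a folder of project files and copies the archive to two separate storage locations; all of the paths are hypothetical placeholders, and a commercial backup system would add versioning, verification, and reporting on top of this.

    import shutil
    from datetime import datetime
    from pathlib import Path

    # Hypothetical paths - point these at your project folder and storage shares
    PROJECT_DIR = Path("C:/PLC_Projects/Line1")
    STORES = [Path("//nas-east/backups/Line1"), Path("//nas-west/backups/Line1")]

    def backup_project() -> None:
        stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
        # Creates Line1_<timestamp>.zip in the current working directory
        archive = shutil.make_archive(f"Line1_{stamp}", "zip", root_dir=PROJECT_DIR)
        for store in STORES:
            store.mkdir(parents=True, exist_ok=True)
            # Two copies in different locations guard against a single-site loss
            shutil.copy2(archive, store / Path(archive).name)

    if __name__ == "__main__":
        backup_project()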

Where will I store my backups?

Network storage is a great place to start, however I don’t recommend placing all your eggs in one basket so to speak.

Data storage centers on opposite ends of a facility are a best practice in case of fire or other disasters.

Backups are great, but recovery time is what matters

All the backups in the world are great, but when the time comes the only aspect that really matters is the effectiveness of your recovery plan.

To that end, you should ask yourself if every member of your control system team has the following:

    • Access to documented procedures for recovering all system backups?
    • An understanding of where to find backups, and how to recover them?
    • Shared knowledge from previous recovery events, so that if a team member leaves, the others aren’t left in the dark?

These are questions and evaluations that we must all ask ourselves from time to time to ensure that our facility has the insurance it needs when a failure occurs.

By being proactive, we can eliminate potential lost production time due to having to conduct an emergency investigation to locate procedures, backups, configuration data, passwords, and other critical system information.

When the time comes, these failure “opportunities” will either expose our weaknesses, or substantiate our preparedness.

Either way, our response in these situations will have a huge impact on how much faith the operational team has in their control systems team.

Written by Brandon Cooper
Senior Controls Engineer and Freelance Writer

Have a question? Join our community of pros to take part in the discussion! You'll also find all of our automation courses at TheAutomationSchool.com.

Sponsor and Advertise: Get your product or service in front of our 75K followers while also supporting independent automation journalism by sponsoring or advertising with us! Learn more in our Media Guide here, or contact us using this form.


Safety Instrumented Systems vs Basic Process Control Systems

Most industrial processes incorporate Safety Instrumented Systems (SIS) and Basic Process Control Systems (BPCS) in their operations. This article compares the features of each system.

Image by Emmanuel Okih

SIS and BPCS Compared

A Safety Instrumented System’s (SIS) function is to monitor and maintain process safety. Since they are not frequently called into operation, SISs are usually passive and dormant.

Because of this, it’s important that devices used in SISs have diagnostics to ensure components are functioning properly, which in turn reduces the frequency of manual verification.

Because of the criticality involved, it is important that changes made after installation comply strictly with the site’s Management of Change (MOC) process.

A Basic Process Control System’s (BPCS) function is to control process operations. Since they are frequently called into operation, BPCSs are usually active and dynamic.

BPCSs are characterized by their responses to different types of digital and analog inputs and by their output logic functions, which makes most failures self-revealing.

Changes to BPCSs are very common and are required to maintain accurate process control.

Image by Emmanuel Okih

Systems Independence

Both SIS and BPCS systems have multiple layers of protection, but since BPCS systems function mainly as the “control system,” many industrial standards recommend that the SIS systems be separate from BPCS systems:

“A device used to perform part of a safety instrumented function shall not be used for basic process control purposes, where a failure of that device results in a failure of the basic process control function which causes a demand on the safety instrumented function, unless an analysis has been carried out to confirm that the overall risk is acceptable.” – Excerpt from ANSI/ISA 84.00.01-2004 11.2.10

Communication Considerations

It’s always a good idea to write-protect field device communication settings to reduce the risk posed by cyber security threats.

But this is especially true with SIS systems, as a means to prevent changes being made to the system’s devices that would fall outside of the specified safety requirements (as provided by ANSI/ISA 84.00.01-2004.)

Importance Of Diagnostics

In BPCS systems, communication protocols like HART and Foundation Fieldbus play an important role, but not so much in an SIS system.

SIS systems are typically more focused on device diagnostics, since those diagnostics provide information on the health and status of the safety devices in use.

That level of diagnostic information is typically not needed in most BPCS control systems.

Common Cause Failures

These can be caused by a power surge, power loss, equipment vibration, radio frequency interference or temperature fluctuation.

Common Cause Failures also include software bugs or undetected device failures, and can be common in SIF systems with a high performance level.

As mentioned above, to reduce risks from these common failures it is recommended to completely separate SIS systems from the BPCS by using redundant devices when and where necessary.

Nuisance Trips

These trips occur in SISs when devices fail within a “specified probability” in a way that results in an alarm or warning signal.

Often when these trips occur the system needs to be manually reset; however, in some more advanced systems, actions can be taken to automatically reset trips that truly fall into the “nuisance” category.

This can be implemented with the use of voting logic that compares device diagnostics and multiple data-points to determine if user intervention is required.
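As a purely illustrative aside (not from the standard quoted above), here is what simple two-out-of-three voting on a trip condition might look like in Python; the transmitter values, health flags, and trip limit are all made up for the example.

    TRIP_LIMIT = 150.0  # hypothetical trip limit in engineering units

    def vote_2oo3(pt_a: float, pt_b: float, pt_c: float,
                  a_healthy: bool, b_healthy: bool, c_healthy: bool) -> bool:
        # Trip only when at least two of the three transmitters call for it.
        # A transmitter flagged unhealthy by its diagnostics is counted as a
        # vote to trip (fail-safe), so one bad device plus one high reading
        # is still enough to shut the process down.
        votes = 0
        for value, healthy in ((pt_a, a_healthy), (pt_b, b_healthy), (pt_c, c_healthy)):
            if not healthy or value > TRIP_LIMIT:
                votes += 1
        return votes >= 2

    # One transmitter reads high and another reports a diagnostic fault -> trip
    print(vote_2oo3(152.0, 120.0, 118.0, True, True, False))  # True

The same idea works in reverse for nuisance-trip suppression: a single high reading from a transmitter whose diagnostics look suspect can be held off until a second device agrees.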

Conclusion

Since SISs and BPCSs are both used to automate industrial processes with the aid of input, output, and logic functions, it is important that engineers and technicians ensure recommended standards and guidelines are strictly adhered to while working on these systems to avert dangerous situations in plants.

Written by Emmanuel Okih
Automation and Control Systems Engineer and Freelance Writer
Edited by Shawn Tierney

Have a question? Join our community of pros to take part in the discussion! You'll also find all of our automation courses at TheAutomationSchool.com.

Sponsor and Advertise: Get your product or service in front of our 75K followers while also supporting independent automation journalism by sponsoring or advertising with us! Learn more in our Media Guide here, or contact us using this form.


The Realities of Project Management: You Can’t Always Do It Faster

(c) Can Stock Photo / michaeldb

In 1975, computer architect Fred Brooks published a series of essays on software engineering and project management called “The Mythical Man-Month”.

These cautionary tales and inevitable realities not only stand the test of time, but also can crossover to many applications.


In this first article on the subject, I’ll talk about his essay (also called) “The Mythical Man-Month”. The basic theme of it is that the time it takes to finish a project is not linearly proportional to how much manpower you throw at it.

In some cases, adding additional manpower can make a project take longer to complete.

The first issue is that most project managers do a poor job of estimating the time it takes to finish a project.

Experience tells us that unplanned things always happen, but there is often a lot of pressure to make project deadlines. And most project managers feel that if they did estimate the project accurately they would probably lose their jobs.

On top of that, progress is often not measured accurately or in a realistic way since partitioning the project is not always easy.

To try to expedite the progress, additional manpower will often be thrown at a project.

The reality is that manpower and time are only interchangeable commodities when (1) the project can be partitioned, and (2) the project doesn’t require that the individual participants communicate with each other.

A task such as garbage pickup at a stadium would fit the bill here: It would take one person a month to pick up all the garbage, or 30 people one day if each had their own section.

For the control system projects that we do, partitioning the work is not as easy.

Sure, different programmers can tackle different sections of the program, and they can even test their sections thoroughly to ensure they do what they are supposed to do.

The issues develop when it is time to combine all of these parts into a complete program.

A lot of programming time must be redirected into coordination meetings throughout the process. And more time is spent reworking the logic as the sections are stitched together.

Sometimes there is a sequence that must be adhered to, and time is lost waiting for someone else to finish their work. And some tasks simply won’t be able to be accelerated.

An example that everyone understands is a pregnant woman: No matter how many doctors, doulas or midwives that are thrown at the “project”, that baby isn’t coming until around nine months pass.

Simply put, projects that have a lot of debugging, testing or other post-production requirements may not be able to be completed any faster even with additional manpower.

In summary, when it comes to the effect adding manpower has on the time it takes to complete a project, the results can be broken down into the following four categories:


Figure 1 – Image by Carlo Zaskorski

Category 1: Time Directly Proportional To Manpower

While the thought is that “adding manpower will result in a perfect slope relationship between time and manpower,” this is rarely the case.

This can only work when a task is very basic and does not require communication between those involved, as in the previously mentioned “stadium cleaning” example.

Figure 2 – Image by Carlo Zaskorski

Category 2: Diminishing Returns

If the task requires some communication between the workers, the graph becomes logarithmic.

There is a point in these projects where adding more manpower does very little to affect how long the project will take.

Category 3: Complexity Limits Effective Team Size

Figure 3 – Image by Carlo Zaskorski

If the task requires a lot of communication and is complex, there is an optimal amount of manpower that should not be exceeded.

When it is, adding more manpower will actually cause the project to take longer.

 

Category 4: More Manpower Won’t Accelerate All Tasks

Figure 4 – Image by Carlo Zaskorski

Some tasks are simply not able to be partitioned.

In these cases, the amount of manpower added will never significantly affect the time.
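To make those four shapes concrete, here is a small Python sketch of my own (not from Brooks) that models completion time versus headcount for each category: perfectly partitionable work, work with a fixed sequential portion, work with a per-pair communication cost, and work that cannot be partitioned at all. The constants are arbitrary.

    def completion_time(workers: int, total_work: float = 120.0,
                        sequential: float = 0.0, comm_cost: float = 0.0,
                        partitionable: bool = True) -> float:
        # Toy model of project duration versus headcount (units are arbitrary).
        # total_work  - effort that can be divided among the workers
        # sequential  - portion that must happen in order regardless of headcount
        # comm_cost   - time added for each pair of workers who must coordinate
        if not partitionable:                 # Category 4: headcount doesn't help
            return total_work
        pairs = workers * (workers - 1) / 2   # n(n-1)/2 communication paths
        return sequential + total_work / workers + comm_cost * pairs

    for n in (1, 2, 4, 8, 16):
        print(n,
              round(completion_time(n), 1),                                # Category 1
              round(completion_time(n, sequential=30), 1),                 # Category 2
              round(completion_time(n, sequential=30, comm_cost=0.5), 1),  # Category 3
              round(completion_time(n, partitionable=False), 1))           # Category 4

With the communication term included, the Category 3 curve bottoms out around a certain team size and then climbs again, which is exactly the behavior described above.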

 


Written by Carlo Zaskorski
Controls Design Engineer and Freelance Blogger

Have a question? Join our community of pros to take part in the discussion! You'll also find all of our automation courses at TheAutomationSchool.com.

Sponsor and Advertise: Get your product or service in front of our 75K followers while also supporting independent automation journalism by sponsoring or advertising with us! Learn more in our Media Guide here, or contact us using this form.


Insider News: Prepping for Automation Fair


Note: Insider News articles & videos cover behind the scenes topics at The Automation Blog, Podcast & Show. Starting in 2021 they’re now posted at http://TheAutomationBlog.com/join

Update 11-15-19: Got some bittersweet news this week. Some real estate I’ve been trying to sell for years has found a buyer! That said, the downside is this transaction is going to prevent me from making it out to Chicago next week 🙁 But if Rockwell makes the presentations public again I will plan on covering those here.


Let me start by wishing everyone a very happy November, and thank you all for your continued support and patronage!

As we discussed last month, one of our new “instant rewards” for patrons is access to exclusive Insider News posts where I share what’s going on behind the scenes here at Insights In Automation.

A few weeks ago I made the decision that it was time to start attending industrial automation trade shows again as the founder and editor of TheAutomationBlog.com.

For my first show I choose Rockwell’s Automation Fair, which is being held on November 20–21, in Chicago, Illinois.

Once that decision was made, I registered online and began signing up for sessions, and my final list looks something like this:

Wednesday:
T71 – PlantPAx System: What’s New and What’s Next
T86 – Designing Machine-level HMI with Studio 5000 View Designer Demonstration
T87 – Extend Visibility and Handling of Alarms: New Mobile Smartphone Application
T54 – What’s New in Connected Components Workbench Software
L24 – FactoryTalk View Site Edition Application: What’s New

Thursday:
L25 – Designing Machine-level HMI with PanelView 5000 and View Designer
L12 – Studio 5000 Logix Designer: Advanced Hands-on Experience
T97 – FLEX 5000™ I/O: Flexible, high performance, multi-discipline I/O
T91 – Visualization: What’s New and What’s Next

Hopefully I’ll learn enough new information to write several blogs about what’s new with Rockwell like I did back in 2016.

As far as travel arrangements, I decided to drive out and back, opting for thirteen hours in the car over the expense of airfare and rental cars.

Honestly, it wasn’t just the travel expenses that convinced me to drive – I’m also not a big fan of airport security, and cramped airline seating.

That said, now that I’ve decided to drive I also need to spend some time searching Audible.com for some good new books to listen to on the ride since thirteen hours is quite a long time to sit behind the wheel (if you have any suggestions I’d love to hear them!)

I also wanted to put it out there that if there are any products you’d like me to check out or ask about while I’m at the fair, please don’t hesitate to let me know.

And if there are any other trade shows you think I should attend in the coming year, I’d love to hear about them!

Until next time, Peace ✌️ 

If you enjoy this episode please give it a Like, and consider Sharing as this is the best way for us to find new guests to come on the show.

Shawn M Tierney
Technology Enthusiast & Content Creator

Eliminate commercials and gain access to my weekly full length hands-on, news, and Q&A sessions by becoming a member at The Automation Blog or on YouTube. You'll also find all of my affordable PLC, HMI, and SCADA courses at TheAutomationSchool.com.


Migrate / Convert – Leaving Your Legacy Control Equipment

Images by Brandon Cooper

Some will argue that there’s no reason to remove part of a control system that today is running as well as it has for the last twenty years (or longer.)

Image by Brandon Cooper

Especially when that legacy hardware has been as durable as any tank used during Desert Storm, and runs as well as an 80’s Oldsmobile Cutlass in mint condition.

But even though parting with your legacy system could be as hard as letting go of your favorite pair of blue jeans, it could be time to do just that.

When you can no longer find replacement parts, or the cost of replacement parts is higher than the cost of a new control system, it’s time to consider moving on.

Image by Brandon Cooper

After moving through the stages of grief that often follow a decision to replace old but reliable equipment, the next step is to map out a migration path.

Creating a solid plan will take time and preparation, and below I’ll share some points you should consider while planning your path forward.

What was the existing method of communication to the HMI/SCADA system, and how will it change with the new system?

The legacy system’s communication protocol might have been Token Ring, Data Highway, Modbus, or some other legacy protocol that’s not likely the default means of communications of the replacement system.

In fact, most current Control Systems have Ethernet based communications, like Ethernet/IP, PROFINET, and CC-Link IE.

But while the cabling for legacy networks like Data Highway Plus can be daisy chained up to 10,000ft without additional hardware, modern CAT5E and CAT6 Ethernet network cabling is limited to runs of roughly 328ft in any one direction.

So depending on the distance to each drop, multiple Ethernet switches or Fiber Optic cabling and transceivers may be needed to replace long runs of legacy communication cabling.

Will the legacy system that you are replacing still need to communicate with other legacy systems that will stay in operation for some amount of time?

Image by Brandon Cooper

When using a phased migration approach, it’s not uncommon for the new control system to need to continue to communicate with other legacy systems that will stay in operation for some time.

To address this, you may need to install one or more “Legacy Network Gateways,” like EtherNet to Serial Converters, Ethernet to Legacy Network Bridges, or some other type of gateway depending on the protocols of the old and new systems.

Either way, if all the “pieces” of the control system are not moving forward at the same time, intermediate steps need to be implemented so the new system can communicate to the existing systems.

When installing my new I/O modules, should I re-wire or use conversion kits?

Image by Brandon Cooper

For many legacy upgrades, you’ll have the choice to rewire from old to new I/O modules with a “fan-out” type of wiring lead back to terminal strips, or you can purchase conversion kits that enable you to bypass the re-landing of hundreds (or even thousands) of wires.

These conversion kits typically use more cabinet space and often leave the existing chassis in place, but they also dramatically decrease the amount of time it takes to cut-over to the new system.

And while they add to the material costs, the added expense is usually offset by the labor cost that is saved.

In the end, the decision to use conversion kits usually comes down to a matter of personal preference, or cut-over time constraints.

Moving Forward By Leaving Your Legacy Behind

As the legacy control equipment is powered down for the last time, all the planning and considerations you have taken in prior weeks should ease the pain of knowing that the “old reliable” system will not see operation again.

So wipe that teardrop from your cheek and embrace the new opportunities, (and frustrations and nuances) that you’ll gain by installing the new system.

Hopefully, it will help you sleep at night for the next ten to twenty years.

And at some point in the future, you’ll be able to talk about “the good old days” of that legacy equipment you miss so much.

Written by Brandon Cooper
Senior Controls Engineer and Freelance Writer

Have a question? Join our community of pros to take part in the discussion! You'll also find all of our automation courses at TheAutomationSchool.com.

Sponsor and Advertise: Get your product or service in front of our 75K followers while also supporting independent automation journalism by sponsoring or advertising with us! Learn more in our Media Guide here, or contact us using this form.


DH-485, RSLinx – Communications Hardware and Driver Setup (S25)


In this week’s episode of The Automation Show, I cover the different hardware used to connect a PC to a DH-485 network, as well as how to configure RSLinx Classic Drivers for each:

For more information, check out the “Show Notes” located below the video.




The Automation Show, Episode 25 Show Notes:

Support our site and get early access to our shows and podcasts!

You can now support our site with a small monthly pledge and in turn receive instant rewards! To find out more visit https://TheAutomationBlog.com/join.

You can also purchase the entire season of The Automation Show for a one time donation of $25 at https://vimeo.com/ondemand/theautomationshow.

Thanks in advance for your support!

Vendors: Would you like your product featured on the Show and Blog? If you would, please contact me at: https://theautomationblog.com/contact

Until next time, Peace ✌️ 

If you enjoy this episode please give it a Like, and consider Sharing as this is the best way for us to find new guests to come on the show.

Shawn M Tierney
Technology Enthusiast & Content Creator

Eliminate commercials and gain access to my weekly full length hands-on, news, and Q&A sessions by becoming a member at The Automation Blog or on YouTube. You'll also find all of my affordable PLC, HMI, and SCADA courses at TheAutomationSchool.com.


Building and Maintaining a Sustainable Team

Having worked in multiple facilities, I’ve seen first hand how different automation teams operate under different philosophies and working conditions.

In evaluating the control and automation groups I have worked in or around, I’ve found there are two main philosophies, sitting at opposite ends of the spectrum.

Philosophy A: Smallest Team Possible, On Call 24/7/365

Facilities that fall into this first category have the goal of maintaining the smallest automation team possible.

And the control engineers they do employ are on call around the clock, week in and week out, even though in most cases they are never compensated for the countless hours of overtime they end up working.

Then, after the facility is done burning out their engineer, they help them find a way out and repeat the process with someone else.

Philosophy B: Sustainable Team Focused On Emergencies, Support, and Upgrades

Facilities that fall into this second category strive to maintain a sustainable engineering team for the benefit of both the company and their engineering team.

They build and size a team that cannot only handle the times when the facility is having difficulties, but one that can also provide “customer service” to the facility operations group.

And when operations are running smoothly, their team focuses on manufacturing improvements that reduce overall costs and improve plant efficiencies, with the resulting saving more than offsetting the expense of having the engineer team in the first place.

Building and Maintaining a Sustainable Automation Team

In addition to the number of control engineers that is needed to maintain a facility with balance for both the facility and the automation team members, there are some other key components that can make a team sustainable (or lack sustainability.)

Documentation:

The strongest control teams have documented “to the letter” all system assets.

This documentation consists of network drawings, fiber and connectivity drawings, passwords and login information, as well as a master asset list with all operating system and firmware revision levels.

Procedures:

From restarting the facility after a power loss, to where and how to accomplish system backups, to all other system procedures, sustainable control teams have procedures for everything that they touch.

Training:

If you are not moving forward, you are moving backward in this fast paced, ever changing technological field of work.

Training IS necessary for any team to stay up to date with the newest technologies.

Workload Balance:

Balancing the workload across team members is a must.

A real team helps each other with work/life balance, as well as helps each other across the different stages of life.

One week I may need someone to take a “call” in my place so I can attend my son’s playoff game, and the next week I may need to cover for a team member who’s helping a sick family member. The best teams take care of each other.

Balance of skill sets:

Everyone on a team will bring a different skill set to the group.

One person may have strength in networking, while another is proficient in PLC programming, while yet another has extensive experience with database and big data manipulation.

These are all important skills, and when everyone brings such experiences to the group great tasks can be accomplished and maintained.

Leadership:

Last but not least, it takes a good leader to bring all this together.

The group leader needs to be willing to proactively monitor the group’s needs and progress, while also making sure that operations needs are met.

The leader needs to monitor the group for load distribution, training, balance of skills, and other criteria to ensure the group is stable for the long term.

When a team is managed in such a way, when a member of the team does leave, the group and facility are not immediately in distress due to the loss of a single engineer.

While the loss is still felt, it doesn’t result in the need to completely rebuild the group, nor does it result in the loss of knowledge needed for the facility to continue to move forward.

The Results Of Each Approach

Working as part of a minimalist team for a company that falls into category “A” can really be a nightmare for the engineer, since they’ll often suffer from a lack of work/life balance.

And while there is some truth to the old saying, “any work is better than no work,” most people you’ll find working in engineering teams at category “A” facilities won’t think twice about leaving to join the engineering team in a company that falls into category “B.”

“You don’t earn the respect of your employees by disrespecting them – SMT”

That’s because working as part of a sustainable control and automation engineering team can be one of the most rewarding, innovative, and intellectually stimulating jobs obtainable.

With the right team members in the right environment, there’s truly no limit to what can be accomplished by a sustainable team working together.

Written by Brandon Cooper
Senior Controls Engineer and Freelance Writer

Have a question? Join our community of pros to take part in the discussion! You'll also find all of our automation courses at TheAutomationSchool.com.

Sponsor and Advertise: Get your product or service in front of our 75K followers while also supporting independent automation journalism by sponsoring or advertising with us! Learn more in our Media Guide here, or contact us using this form.


Getting A-B PLC Data Into A Raspberry Pi

In my last article I discussed why you might want to connect your PLC to an IoT device like a Raspberry Pi, as well as the software we use to accomplish this connection with A-B PLCs (Node-Red with the free node-contrib-pccc driver.)

In today’s article I’ll walk you through configuring node-contrib-pccc to read data from your A-B PLC into a Raspberry Pi.

Part 1: Network Connections

In this example I have an A-B PLC (MicroLogix 1400) connected to my WiFi router via a standard LAN cable. I also have my Raspberry Pi and laptop computer connected to the same WiFi router via wireless connections.

Image by Nilesh Soni

While you can access your Raspberry Pi directly by connecting a USB keyboard, mouse, and HDMI display (TV, monitor, etc.) to it, I've found that doing so consumes a lot of processing power on the Raspberry Pi, which substantially slows down the system.

To avoid this, I prefer to connect to my Raspberry Pi remotely over the same local network using my laptop computer and a VNC client application like VNC Viewer.

Part 2: Node-Red PCCC Configuration

Once you’ve connected to your Raspberry Pi, start your Node-Red server and open the PCCC Input Node.

Next, navigate to the Connections section, enter your A-B PLC's IP Address and Port Number, and set the Cycle Time and Time Out per your requirements.

Note: In my experience, A-B PLCs generally use Port 44818.

Image by Nilesh Soni

Next we need to configure the PLC variables that we want to fetch data from. To do this, you’ll need to know where in your PLC’s memory this data is.

With a legacy A-B PLC, it will be an “Address,” while with newer A-B PACs it will be a “Tag.”

In our example we are using a MicroLogix 1400, so we’ll need to provide the PLC Addresses for each piece of data we want to fetch.

For testing purposes we’ll use N7 integer values which we can change manually in the PLC using RSLogix, and then check within Node-Red to see if we are getting the same values or not.

Image by Nilesh Soni
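If you'd like to spot-check those addresses from outside of Node-Red, one option is a few lines of Python run from any computer on the same network. This is only an illustrative sketch, assuming the third-party pycomm3 library and a PLC at the hypothetical address 192.168.1.10; it is not part of the Node-Red configuration itself.

# Minimal sketch: spot-check MicroLogix N7 integer words over Ethernet.
# Assumes the pycomm3 package is installed; the IP address is a placeholder.
from pycomm3 import SLCDriver

with SLCDriver('192.168.1.10') as plc:
    # Read the same N7 integer words we plan to poll from Node-Red
    for address in ('N7:0', 'N7:1', 'N7:2'):
        result = plc.read(address)
        print(address, result.value, result.error)

If the values printed here match what you see in RSLogix, you know the addresses are good before you ever wire up the Node-Red flow.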

Part 3: Testing Communications

Now that we have completed our configuration, we can connect our Input Node to the Output Debug Node to see the incoming data from the PLC.

To do this, drag the Debug Node from the Output Nodes and drop it into your workspace, then connect (wire) the Input Node to the Output Debug Node:

Next, click on the Deploy button (in the top right corner of the window) to see the incoming data from the PLC in the Debug Area:

Next Step: Data Processing

Now that we have our data coming into the Raspberry Pi's Node-Red server, there are basically three kinds of operations we perform with it:

  1. Store data in a database
  2. Compare the data with reference values to provide a result or trigger an operation
  3. Display the data to the client in some visual form

Although the Node-Red server on the Raspberry Pi has several nodes to Store, Process, and Display data, there are many reasons why you may not want to use a Raspberry Pi computer to do this, including:

  • Raspberry Pi computers have limited processing capabilities and RAM
  • Multiple Raspberry Pis may be needed to connect to all the PLCs in the facility, but we often need all the data for calculations and displays.
  • Remote locations may need access to historical data and complicated screens, which is more bandwidth intensive than reading raw values at set intervals.

For these reasons it's more common to use Raspberry Pi computers to fetch the data we need from our PLCs, and then use MQTT (which uses minimal bandwidth and works even over 2G connections) to transmit the raw data to a server to be stored and processed.
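As a rough sketch of that pattern, the snippet below publishes a handful of values to an MQTT broker using Python and the paho-mqtt client. The broker address, topic name, and data values are placeholders for illustration only; they are not part of the Node-Red setup described above.

# Minimal sketch: publish raw PLC values to an MQTT broker from the Pi.
# Assumes the paho-mqtt package; the broker address and topic are placeholders.
import json
import paho.mqtt.publish as publish

BROKER = "broker.example.com"   # hypothetical broker (for example, a local Mosquitto instance)
TOPIC = "plant/line1/micrologix/n7"

# In practice these values would come from the PCCC reads at a set interval
payload = json.dumps({"N7:0": 123, "N7:1": 456, "N7:2": 789})

publish.single(TOPIC, payload, hostname=BROKER, port=1883, qos=1)

The same publish step can also be done with Node-Red's own MQTT output node; the point is simply that only small payloads leave the Pi, while the heavy storage and processing happen on the server.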

Conclusion:

In this article we learned how to set up our Node-Red PCCC configuration running on our Raspberry Pi computer to read data in from an A-B PLC.

We also covered why you typically won't use a Raspberry Pi computer to process, store, and display that data, but will instead send that data to a server (like AWS IoT) for processing.

In my next article I'll cover MQTT, and how to use it to transmit data from a Raspberry Pi computer to an AWS IoT system.

Written by Nilesh Soni
Provider of custom ERP solutions and Freelance Writer
Edited by Shawn Tierney


HMI and SCADA Alarms with Meaning

Image by Brandon Cooper

Picture this: You enter a control room and immediately suffer sensory overload as a result of an overwhelming number of audible alarms and flashing lights.

As you take in the situation, you notice that your operators are frantically clicking every acknowledge button in sight in an attempt to silence the cascade of “nuisance” alarms.

As you take in the scene, the call that prompted your visit to the control room begins to make more sense.

It was a call from your plant manager asking you, his control system engineer, why none of the operators responded to the repeated alarms that occurred overnight.

His questions included, “did we get a specific alarm?” and “was the alarm ever acknowledged?”

Now if you've never experienced a situation like this, rest assured it's a common occurrence in many facilities these days, resulting from an overwhelming number of control system alarms.

For those of you who experience similar situations in your control rooms, I’d suggest to you that it’s time to take a look at your Alarm Summary, a powerful tool when used correctly.

Example: The following alarm summary contains 26 acknowledged alarms. To be proactive, it’s time for a cleanup before complacency sets in.

Image by Brandon Cooper

I've personally inherited systems with over two hundred active alarms in the alarm summary, and alarm logs that would not go back further than twelve hours due to the overwhelming number of alarms being generated.

If you find yourself in the same situation, the forthcoming task to bring your alarms under control will take much coordination and communication between operations and engineering.

But the upside of such an effort is that when your alarms have been streamlined, it can create an environment of operator interaction that improves the overall operations of your facility, reduces downtime, and allows faster and more accurate decision making.

Getting Started With Alarm Cleanup

Make no mistake about it, cleaning up your alarm system will likely mean re-evaluating every single alarm in your system.

It will also require operations and engineering to work together to review and update the parameters of each alarm based on current conditions in your plant, including:

  • Is the alarm needed at all?
  • What is the priority of the alarm?
  • Should the alarm be audible?
  • What reaction should the operators have if they get this alarm?

Many additional parameters can also be evaluated, but the above list is a good place to start.
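One practical way to decide where to start is to rank your alarms by how often they occur. If your SCADA package can export its alarm journal to a CSV file, a short script like the hypothetical sketch below (written in Python, with assumed file and column names) will surface the worst offenders so the review team can tackle those first.

# Minimal sketch: rank exported alarm-journal entries by occurrence count.
# Assumes a CSV export named alarm_journal.csv with a "TagName" column.
import csv
from collections import Counter

counts = Counter()
with open("alarm_journal.csv", newline="") as f:
    for row in csv.DictReader(f):
        counts[row["TagName"]] += 1

# Print the ten most frequent (and likely "nuisance") alarms for review
for tag, count in counts.most_common(10):
    print(f"{count:6d}  {tag}")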

All alarms that are determined to be irrelevant or obsolete can and should be removed from the system, while other alarms may only need their alarm limits (or other parameters) updated.

And while this process may take days or weeks, the result will be a manageable alarm summary for your operators, with at most five or ten active alarms.

This in turn will allow them to notice new alarms as they come in, and will result in each new alarm getting more attention as it becomes active.

But also keep in mind that, depending on your facility and control system size, an alarm re-evaluation will likely need to take place once every six months to a year.

Since an average control system picks up several new alarms per month, over time the summary will start to become cluttered with irrelevant information again.

So I’d recommend adding this as a preventative maintenance task in your control group, since the positive results definitely make it a worthwhile investment of time.

As control engineers we have the power to create an atmosphere in our control rooms where SCADA and HMI Alarms are important.

We can do this by keeping alarming systems free of clutter, which in turn promotes an atmosphere in which, when an alarm occurs, we react to it instead of ignoring it, addressing and/or repairing the underlying failure or issue.

By doing so, the probability of an important alarm drowning in a sea of irrelevance becomes much lower.

And with the smoke and mirrors out of the way, the operations team will notice important alarms, and appropriate corrective action will be taken in a timely and efficient manner.

Written by Brandon Cooper
Senior Controls Engineer and Freelance Writer


UDDT, AOI – Using Effectively To Document Programs

In yesterday’s article I covered the advantage of using verbose tag names in place of pages of comments to document programmable controller programs.

Beyond that, there are some other tools available that can help you with organization.

Focusing on Rockwell Automation Studio 5000 software, both User-Defined Data Types (UDDTs) as well as Add-On Instructions (AOIs) should be used whenever the application allows for it.

Both are derived from the principles of Object-Oriented Programming (OOP), which simply means that you can take something that you use often and make it into an "object," similar to how an XIO or an OTE is an object. You can then drag and drop it into the program and use it repeatedly.

UDDTs allow you to group together variable types into a structure.

For example, if you created a UDDT called “Motor_Start_Stop_Station”, it would contain four internal variables called “Start_Pushbutton”, “Stop_Pushbutton”, “Motor_Overload” and “Motor_Starter” (see Figure 3).

Figure 3. Creating UDDT Structure – Image by Carlo Zaskorski
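If it helps to think of this in general programming terms, a UDDT behaves much like a structure or data class in a text-based language. The Python sketch below is only an analogy (Studio 5000 uses ladder or structured text, not Python), but it mirrors the same idea of grouping four members under one named type.

# Analogy only: a UDDT groups related members under one named type,
# much like this data class does.
from dataclasses import dataclass

@dataclass
class MotorStartStopStation:
    start_pushbutton: bool = False
    stop_pushbutton: bool = False
    motor_overload: bool = False
    motor_starter: bool = False

# Create a new "variable" of this type, then address its sub-elements
conveyor = MotorStartStopStation()
conveyor.start_pushbutton = True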

Then to use this UDDT, you simply create a new variable of the type "Motor_Start_Stop_Station" and it will have the same addressable sub-elements as the UDDT it's based on, as shown in the examples below:

Figure 4. Creating UDDT Variable – Image by Carlo Zaskorski
Figure 5. Using UDDT Variable in Logic – Image by Carlo Zaskorski

AOIs allow you to create logic modules that you can drop into your program as needed.

This is great when a certain bit of logic is repeated multiple times in a program.

Another benefit of this is that if a modification must be made, the shared logic can be changed in one place and the change will apply to all instances of its usage.

Creating an AOI is similar to creating a UDDT in that you must define internal variables that will be linked to, and define them as input or output variables as well as whether they are required or optional:

Figure 6. Defining AOI variable names and properties – Image by Carlo Zaskorski

You must then define the internal logic:

Figure 7. Example of logic defined inside of an AOI – Image by Carlo Zaskorski

After that, you have to create a variable (backing tag) for the AOI, since it is an object; this variable is used as the AOI's name when it is dragged into a routine to be used in the program.

The AOI can then be used as any other object or instruction (XIC, XIO, CPT, etc.). In the Figure 8 example, it is combined with a UDDT that is providing the linked variable names.

Figure 8. Ladder routine with an AOI that also uses a UDDT – Image by Carlo Zaskorski
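Continuing that same analogy, an AOI is similar to a reusable function: the logic is written once and each instance simply passes in its own linked variables. The Python sketch below (reusing the MotorStartStopStation data class from the earlier sketch) shows the equivalent seal-in behavior purely as an illustration of the concept, not as actual controller code.

# Analogy only: an AOI is like a function applied to each instance.
# MotorStartStopStation is the data class from the earlier sketch.
def motor_start_stop(station):
    # Seal-in logic: run while start is pressed or already running,
    # drop out on stop or overload
    station.motor_starter = (
        (station.start_pushbutton or station.motor_starter)
        and not station.stop_pushbutton
        and not station.motor_overload
    )

motor_start_stop(conveyor)   # called once per scan for each motor station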

And while in my above example I use a simple motor start stop station, AOIs can also be used with more complicated code as well as custom application logic.

At the end of the day, taking advantage of features such as UDDTs and AOIs can help streamline the program and make it easier to implement changes to commonly used logic and data.

And perhaps after reading these two articles you'll come to the same conclusion that I did: that comments had their place when the programming software was more basic, but new versions have so many more features (and so many fewer limitations) that the better way to program today is to maximize your program's organization so you can limit or eliminate the need for comments altogether.

Written by Carlo Zaskorski
Controls Engineer, Product Manager, and Freelance Blogger
Edited by Shawn Tierney


Tag Names – Eliminating The Need For Program Comments

There are a couple of different ways of thinking when it comes to in-program documentation, particularly the use of comments.

One main methodology I see is to use documentation liberally (since it’s free), placing comments on as many rungs as desired.

The other methodology is that less is more, with more emphasis placed on variable tags and organization.

There is no doubt that there are times when a well-written comment can be very beneficial, particularly on the older programming platforms where the variables may have naming limitations.

In some cases that work can be for nothing, since on those platforms the variable tags and comments may not get saved in the processor.

My opinion is that a well-organized program with useful and verbose variable tags (aliases) can eliminate the need for comments altogether.

Using Verbose Comments To Document Hard To Follow Code

First, I’ll explain the issue that I have with comments. Fundamentally, comments are most often used to explain logic that is hard to follow.

Relying on comments this way can excuse logic that could have been factored better to be easier to understand on its own.

Sometimes it is best to write logic that is longer overall if it has a better flow and is easier to read.

Imagine you are reading an old undocumented program that only contains variable addresses.

If there is a good flow to the logic, you can often understand it. If there isn't, it can be very difficult, even with comments.

Sometimes, because every rung is commented, there are comments on basic logic that is easy to understand and doesn't need any. The intent is to be consistent, which is generally a good thing, but it can make the program appear cumbersome.

Imagine you do have a program as described in the above paragraph. It is generally well-written and every rung has a comment regardless of what is going on.

When you go to commission the program, you will inevitably find some things that you have to correct.

Since there are so many comments, you not only have to get down and fix the logic, but now it is expected of you to also maintain all of these comments. This could mean adding new comments as well.

You will have to check the rest of the program logic and comments to make sure they still remain valid after the changes. This adds to the overhead of maintaining the program.

Using Descriptive Tag Names With Logic That’s Easy To Understand

Deciding to forego comments is a methodology that you have to adopt early in the program design, as choosing descriptive yet concise variable tags is a key part of the process.

For example, in a standard start/stop program for a motor starter, you may have labeled the inputs as “START_PB”, “STOP_PB” and “MOTOR_OL”. The output could be labeled as “MOTOR_STARTER”.

The comment (which may not even exist, given the simplicity of the logic) could have read "START/STOP MOTOR STARTER LATCH".

Sidebar: Studies have shown that people read mixed-case words (capitalizing only the first letters) with better comprehension, so consider replacing text such as "MOTOR_STARTER" with "Motor_Starter". The studies were primarily regarding the readability of street signs, but it is something to consider.

With most modern programming environments, the variable tags can be quite long, so a more descriptive and comment-like tag can be used such as “Start_Motor_Pushbutton”.

Choosing proper variable tags is the first part of making this work, but the logic must also be written in the most readable way possible.

This may not always be the most efficient way, and if there is a lot of nested logic in a procedure it may be better to segment it into several rungs for readability.

Figures 1 and 2 below show how this may be done with a single rung and comments compared to using four rungs with verbose variable tags and no comments.

Figure 1. With Comments, One Rung – Image by Carlo Zaskorski
Figure 2. No Comments, Separate Rungs – Image by Carlo Zaskorski

In the above example, each rung in Figure 2 is clearer and so basic that comments are not needed.

Programming in this way can also make future modifications easier as there won’t be a need to maintain comments, and the logic is segmented so it can be easier to add new inputs to the proper areas.

It’s also important to keep in mind that there is no practical limit on the number of internal variables you can use, so adding additional internal tags or coils is not going to affect the program in any noticeable way.
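To make the idea concrete outside of ladder logic, the short Python sketch below contrasts one dense expression with the same logic split into intermediate variables that carry descriptive names; all of the tag names and states here are hypothetical.

# Illustration only: the same run-permissive logic written two ways.
# Hypothetical input states for the example:
Emergency_Stop_Active = False
Motor_Overload_Tripped = False
Auto_Mode_Selected = True
Upstream_Sequence_Complete = True
Manual_Mode_Selected = False
Jog_Pushbutton_Pressed = False

# Dense form - hard to read without a comment:
run = (not Emergency_Stop_Active and not Motor_Overload_Tripped
       and (Auto_Mode_Selected and Upstream_Sequence_Complete
            or Manual_Mode_Selected and Jog_Pushbutton_Pressed))

# Segmented form - the descriptive intermediate names document the intent,
# the same way extra internal coils document separate rungs:
safety_circuit_healthy = not Emergency_Stop_Active and not Motor_Overload_Tripped
auto_mode_ready = Auto_Mode_Selected and Upstream_Sequence_Complete
manual_jog_requested = Manual_Mode_Selected and Jog_Pushbutton_Pressed
Motor_Run_Permissive = safety_circuit_healthy and (auto_mode_ready or manual_jog_requested)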

You don’t have to take the premise of eliminating comments literally. The point of this is to promote a different way of looking at program organization.

You will certainly find that using better variable names and logic flow will make your programs more readable and understandable to a larger group.

And even more efficiencies can be achieved by creating and using your own data types and instructions, which I cover in tomorrow's article.

Written by Carlo Zaskorski
Controls Engineer, Product Manager, and Freelance Blogger
Edited by Shawn Tierney


Common Causes of PLC Failures, and the Solutions

Image by Emmanuel Okih

In today’s article, I cover some of the most common causes of PLC failures, as well as solutions to avoid them.

Input/Output Modules and Field Devices

Image by Emmanuel Okih

I/O failures can be caused by:

  • Errors in the PLC configuration
  • Slack (loose) terminal blocks
  • Damaged wires
  • Incompatible modules (old vs new models)
  • Faulty Intrinsically Safe (I.S.) barriers
  • Faulty Field Terminal Assemblies (FTA)

Typically any of these problems will prevent the PLC from functioning properly, as they either interrupt the PLC program execution or stop it abruptly.

To resolve these, use the system diagnostics to determine the root cause of the problem, and then carry out a full system check to determine its extent.

If the fault doesn't originate at the PLC, inspect the field devices wired into it. Problems with field devices are usually due to damaged circuitry caused by exposure to adverse conditions: moisture, vibration, heat, electromagnetic interference, chemicals, etc.

Power Outages

PLC failures can often be caused by frequency interference and unplanned power outages.

These can result in the backup of the PLC program failing, as well as the scrambling of memory that renders the PLC program unreadable by its central processing unit.

Solutions to consider to protect against these failures include:

  • Ensure PLC programs are backed up regularly
  • Routinely change the PLC batteries that back up volatile memory
  • Ensure backups are stored safely, preferably on a redundant storage solution kept in a cool, dry location free from any form of electromagnetic interference.

Power Supply and Earth Integrity

Power failures obviously disrupt proper functionality of a PLC, and are typically caused by overloaded or worn power cables, slack connections, grid failure, faulty power supply modules, etc.

Consequences of a power failure to a PLC include:

  • System damage due to electrical shocks received by system components
  • Burnt-out components due to power surges
  • Loss of process data due to power surges

Power failure problems can be avoided by using:

  • A backup power source to ensure a constant flow of power to the PLC
  • An uninterruptible power supply (UPS) or redundant power source

Earth integrity failures are typically due to damaged ground wires and slack connections. It should be noted that earth failures also present an unsafe condition for the maintenance crew.

For proper earth integrity, always look out for damaged wires and slack connections, and test the wiring with a multimeter to ensure the PLC's earth terminals are secured to the equipment's grounding point.

Network and Communication Issues

Network communication between PLCs, peripheral devices, Human Machine Interfaces, and DCS and SCADA systems is typically established via wired communication cables.

And when network communication fails, it usually prevents the connected devices from carrying out their intended functions.

Causes of network and communication failures can include:

  • Mis-configuration of devices when installed or replaced
  • Obsolete equipment which doesn’t support newer devices
  • Incompatible changes to network settings
  • Hardware failures of network equipment
  • Power supply failure of network equipment

Network and communication failures can be prevented by:

  • Regularly checking to ensure connection points are solid and that each device still responds on the network (a simple automated check like the sketch below can help)
  • Ensuring firmware is up to date and regularly installing security patches
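As one example of such a check, the short Python sketch below simply attempts a TCP connection to each device's EtherNet/IP port and reports any device that doesn't answer. The device names, IP addresses, and port are assumptions for illustration; the same idea can be scheduled to run periodically and raise an alert on failures.

# Minimal sketch: confirm each networked device still answers on its EtherNet/IP port.
# The device list and IP addresses are placeholders for illustration.
import socket

DEVICES = {"Line 1 PLC": "192.168.1.10", "Line 1 Remote I/O": "192.168.1.11"}
PORT = 44818  # common EtherNet/IP explicit-messaging port

for name, ip in DEVICES.items():
    try:
        with socket.create_connection((ip, PORT), timeout=2):
            print(f"OK      {name} ({ip})")
    except OSError:
        print(f"FAILED  {name} ({ip})")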

Overheating

When the manufacturer's approved spacing and maximum temperature thresholds for equipment installed alongside PLC hardware are not maintained, the PLC and/or peripheral parts can malfunction as a result of overheating.

To avert this, be sure to follow the manufacturer's recommended spacing for equipment around the PLC, especially when adding new equipment into existing panels.

Conclusion

The best way to avoid self-inflicted PLC issues and failures is to be sure you and your team are following the manufacturer's installation and maintenance procedures.

This includes ensuring that your PLC has adequate cooling as well as a reliable and appropriately protected power supply, that new I/O modules and field devices are wired and grounded correctly, that new communication cables are routed according to their specs, and that any changes to network settings take into account all of the existing devices on the network, old and new alike.

If these steps are taken, along with regular PLC program backups and replacement of PLC batteries within their rated lifespans, then most if not all of the above PLC failures can be avoided.

Written by Emmanuel Okih
Automation and Control Systems Engineer and Freelance Writer
Edited by Shawn Tierney
