Posted by: Christian Verstraete | May 27, 2010

Integrating the ecosystem in PLM

Discussions with clients and account teams often trigger thoughts. I’m trying to share those with you, looking for your feedback and comments. Here is another.

Product Lifecycle Management is slowly but surely gaining traction in discrete manufacturing, and is becoming the backbone of the product information in the company. As such it complements the ERP environment, which contains the order information. PLM is supposed to track the product from initial conception to end of life; ERP tracks the order from receipt to delivery (and payment). The fact that the two concepts have evolved separately does not make the interface any easier, but that’s not the topic I want to discuss today.

PLM is being implemented in many companies today, although in most situations it has more of a PDM (Product Data Management) flavour.

As companies increase the outsourcing of manufacturing, soon to be followed by the outsourcing of key design tasks, looking at expanding the reach of PLM beyond the boundaries of the company makes sense. But a number of issues need to be addressed:

  • Ownership of data, IP protection and the presence of potential competitors in the ecosystem require a segmentation of the data to ensure the confidentiality of the information entered by each of the participants.

  • The geographical location of the players may mean having to prove compliance with specific regulations such as ITAR.

  • Network latency influences how quickly information updates propagate, which may result in a system that is not easily accepted by some of the users.

So, to make a PLM environment truly usable across an ecosystem, a couple of elements need to be taken into account. Data needs to be segregated so that each participant only has access to the information he/she truly requires and is allowed to access. Data items may have to be duplicated to ensure acceptable latency, but in that case consistency is critical.
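To make the segregation idea concrete, here is a minimal sketch in Python of partner-scoped access filtering over a shared PLM store. All names here (Item, AccessPolicy, the classification labels) are illustrative assumptions, not taken from any particular PLM product.

```python
# Minimal sketch: each partner sees only its own data plus what it is granted.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Item:
    item_id: str
    owner: str            # partner that contributed the data
    classification: str   # e.g. "public", "restricted", "itar"

@dataclass
class AccessPolicy:
    # partner -> set of classifications that partner may see
    grants: dict = field(default_factory=dict)

    def visible_items(self, partner, items):
        allowed = self.grants.get(partner, set())
        # A partner always sees its own contributions, plus anything its grants cover.
        return [i for i in items if i.owner == partner or i.classification in allowed]

items = [
    Item("BOM-1", "oem", "restricted"),
    Item("CAD-7", "supplier-a", "public"),
    Item("SPEC-3", "oem", "itar"),
]
policy = AccessPolicy(grants={
    "supplier-a": {"restricted"},
    "oem": {"public", "restricted", "itar"},
})
# supplier-a sees its own CAD-7 plus the restricted BOM-1, but never the ITAR item.
print([i.item_id for i in policy.visible_items("supplier-a", items)])
```

Each partner sees its own contributions plus whatever classifications it has been granted; everything else, including ITAR-restricted data, stays invisible to it.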

There are basically three approaches to achieving this, and I happen to know companies that have taken each of the three, so there does not seem to be a right or wrong answer.

1. Build a large, integrated PLM environment that is accessible both from within the enterprise and by partners. Through access rights, each user has access only to the information he/she is allowed to see. From a data management/data integrity and compliance perspective this is the easiest approach; however, network latency might be an issue for partners located far away.

2. Build and deploy small PLM appliances at each of the sites and synchronize the relevant data between the systems. This addresses the bandwidth issue but, on the other hand, may result in partners working with outdated information on some occasions. Here the integration of the PLM appliance with the company’s PD&E environment becomes critical. Although there is a master/slave relationship between the main PLM environment and the appliances, the compliance issue needs to be looked at very carefully, as some information is not allowed to be located in specific geographies.

3. Use a cloud-based approach where the PLM information is updated from the PLM master and partners are allowed, with appropriate security levels, to access that environment. This could be a reasonable compromise, although the network latency issue may still be present, depending on how the cloud is structured.
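The synchronization at the heart of the second approach can be sketched with simple version counters: any record whose local version lags the master's is stale and must be refreshed, and an appliance only ever replicates the items its site is allowed to hold. The classes and names below are a hypothetical illustration, not a real PLM API.

```python
# Illustrative master/appliance sync: version counters detect outdated copies,
# and an allow-list keeps geography-restricted items off the appliance.

class Master:
    def __init__(self):
        self.data = {}  # item_id -> (version, payload)

    def update(self, item_id, payload):
        version = self.data.get(item_id, (0, None))[0] + 1
        self.data[item_id] = (version, payload)

class Appliance:
    def __init__(self, allowed_ids):
        self.allowed_ids = allowed_ids  # compliance: only replicate permitted items
        self.data = {}

    def sync(self, master):
        for item_id in self.allowed_ids:
            if item_id in master.data:
                self.data[item_id] = master.data[item_id]

    def is_stale(self, master, item_id):
        local_version = self.data.get(item_id, (0, None))[0]
        return local_version < master.data.get(item_id, (0, None))[0]

master = Master()
master.update("BOM-1", {"rev": "A"})
master.update("SPEC-3", {"rev": "A"})      # ITAR item, must stay at the master
edge = Appliance(allowed_ids={"BOM-1"})
edge.sync(master)
master.update("BOM-1", {"rev": "B"})       # edge now holds an outdated copy
print(edge.is_stale(master, "BOM-1"))      # True until the next sync
```

The stale flag only tells the appliance *that* it is behind; deciding *when* to resynchronize is exactly the bandwidth-versus-freshness trade-off described above.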

The main issue is, however, not the data aspect, but the business processes and how those are executed across the ecosystem. That will be covered in a future post.



    1. Hello Christian,

      I couldn’t agree more with what you wrote here. When I was with HP in Germany leading the PLC Practice – this is now 7 years ago – we already used the term Product Lifecycle Collaboration (PLC) to indicate that true PLM means inter-company and intra-company product data exchange.

      With our new upcoming product, the ProcessIntegrationBridge, Consequor Consulting AG will build on the principles you outlined in points 2 and 3 above – using a data integrity methodology for ensuring consistent remote updates of product information.

      The “outdated data” challenge is overcome using fast, simple “dirty flag” distribution, indicating that a data set has changed. This is just one example of the base technology used: events and a Pub/Sub mechanism. Not exactly rocket science, but a proven technology used in many mission-critical applications. How those events are handled is based on what we call “Adaptive Applications” – rule-based, configurable, agent-driven small apps that each represent a certain piece of functionality like “make a baseline”, “prepare a report”, etc.

      As those Adaptive Applications are small they may be extended or adapted for new functions either based on extending their rule set or by using them as a basis for new Adaptive Apps.
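The dirty-flag distribution described above can be sketched in a few lines of Python: the master publishes only a change notification (an item id, not the data itself), and each subscriber flags its local copy as dirty and re-fetches lazily. The Bus/Replica names and the topic string are assumptions made for illustration, not Consequor's actual design.

```python
# Hedged sketch of dirty-flag change propagation over a tiny in-process pub/sub bus.
from collections import defaultdict

class Bus:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, event):
        for handler in self.subscribers[topic]:
            handler(event)

class Replica:
    def __init__(self, bus):
        self.dirty = set()
        bus.subscribe("plm.changed", self.on_change)

    def on_change(self, event):
        # Only the flag travels with the event; the data is fetched later,
        # when the replica actually needs the fresh copy.
        self.dirty.add(event["item_id"])

bus = Bus()
replica = Replica(bus)
bus.publish("plm.changed", {"item_id": "BOM-1"})
print(replica.dirty)  # {'BOM-1'}
```

Because the events carry no payload, they stay small and cheap to distribute, which is what makes the mechanism practical across slow or distant links.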

      The execution environment will be cloud-based – we are currently investigating available alternatives, ranging from Amazon’s cloud services to GLOBUS to “roll our own”.

      A fundamental component of this whole PLM Cloud Foundation, similar to what Viacore Inc. used to offer, will be advanced business-level monitoring services from Consequor Consulting that ensure PLM Process Level Agreements (similar to IT SLAs) are kept between business partners.

      … But that is another story to be told in another post…

      Thank you, Christian, for opening up these fascinating ideas on your blog. I am sure we will hear a lot more in this direction.

      Frank Goenninger
      Consequor Consulting AG
