Fig 3. Rules for the DC macro-states transformation.

Source publication
Conference Paper
In the domain of Automated Manufacturing Systems (AMS), Supervisory Control Theory (SCT) is a general model-based framework that contributes to the automated development of control systems. Although SCT delivers important theoretical insights for controller design, its use in industry is still limited because of the discrepancy among...

Citations

... From the composition of the supervisor with the plant, one obtains an FSM that models the closed-loop system behavior, i.e., the system behavior under supervisory control. For the previous example of the transmission system in Fig 2, for instance, the closed-loop model has 8 states and 14 transitions. This model can be converted into an implementable hardware language for practical use [23]. ...
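
For illustration only, the following is a minimal sketch of the synchronous composition used to build such a closed-loop FSM from a plant and a supervisor. It is not code from the cited work; the dictionary-based FSM encoding, the function name sync_compose, and the toy plant/supervisor pair are assumptions made for this example.

# Minimal sketch: synchronous (parallel) composition of two FSMs, as used in
# Supervisory Control Theory to obtain the closed-loop model of plant + supervisor.
# FSM encoding and example machines are illustrative assumptions.

def sync_compose(fsm1, fsm2):
    """Synchronous composition of two FSMs.

    Each FSM is a dict with keys:
      'states' : set of states
      'events' : set of event labels
      'trans'  : dict mapping (state, event) -> next state
      'init'   : initial state
    Shared events must occur in both machines simultaneously;
    private events interleave freely.
    """
    shared = fsm1['events'] & fsm2['events']
    events = fsm1['events'] | fsm2['events']
    init = (fsm1['init'], fsm2['init'])
    states, trans = set(), {}
    frontier = [init]
    while frontier:
        s1, s2 = frontier.pop()
        if (s1, s2) in states:
            continue
        states.add((s1, s2))
        for e in events:
            t1 = fsm1['trans'].get((s1, e))
            t2 = fsm2['trans'].get((s2, e))
            if e in shared:
                if t1 is None or t2 is None:
                    continue                  # shared event disabled by one component
                nxt = (t1, t2)
            elif e in fsm1['events']:
                if t1 is None:
                    continue
                nxt = (t1, s2)                # event private to fsm1
            else:
                if t2 is None:
                    continue
                nxt = (s1, t2)                # event private to fsm2
            trans[((s1, s2), e)] = nxt
            frontier.append(nxt)
    return {'states': states, 'events': events, 'trans': trans, 'init': init}

# Example: a two-state plant and a supervisor that disables 'start' after one use.
plant = {'states': {'idle', 'busy'}, 'events': {'start', 'stop'},
         'trans': {('idle', 'start'): 'busy', ('busy', 'stop'): 'idle'},
         'init': 'idle'}
supervisor = {'states': {'s0', 's1'}, 'events': {'start'},
              'trans': {('s0', 'start'): 's1'},
              'init': 's0'}
closed_loop = sync_compose(plant, supervisor)
print(len(closed_loop['states']), len(closed_loop['trans']))   # 3 states, 2 transitions

States of the composed machine are pairs of component states, which is why the closed-loop model can have more states and transitions than either component on its own.
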
Preprint
Industry 4.0 systems have a high demand for optimization in their tasks, whether to minimize cost, maximize production, or synchronize their actuators to finish or speed up the manufacture of a product. These challenges make industrial environments a suitable scenario for applying modern reinforcement learning (RL) concepts. The main difficulty, however, is the lack of such industrial environments available for training. To address this, this work presents the concept and implementation of a tool that converts any dynamic system modeled as an FSM into an open-source Gym environment. It is then possible to employ any RL method to optimize a desired task. In the first tests of the proposed tool, traditional Q-learning and Deep Q-learning methods are shown running on two simple environments.
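
As an illustration of the kind of wrapper the preprint describes (this is not the authors' tool), the sketch below exposes a toy FSM through the Gym interface and runs tabular Q-learning on it. The FSMEnv class, the reward scheme, the example machine, the hyperparameters, and the choice of the Gymnasium package (the maintained Gym fork) are all assumptions made for this sketch.

# Illustrative sketch: wrapping an FSM as a Gymnasium environment and
# training tabular Q-learning on it. Not the cited tool; all names,
# rewards, and hyperparameters are assumptions.

import numpy as np
import gymnasium as gym
from gymnasium import spaces

class FSMEnv(gym.Env):
    """Gym-style environment whose dynamics are given by an FSM.

    transitions: dict mapping (state, event) -> next state
    goal: reaching this state ends the episode with reward +1
    Disabled events (no transition defined) keep the current state
    and incur a small penalty.
    """
    def __init__(self, states, events, transitions, init, goal, horizon=50):
        super().__init__()
        self.states = list(states)
        self.events = list(events)
        self.transitions = transitions
        self.init, self.goal, self.horizon = init, goal, horizon
        self.observation_space = spaces.Discrete(len(self.states))
        self.action_space = spaces.Discrete(len(self.events))

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.state, self.t = self.init, 0
        return self.states.index(self.state), {}

    def step(self, action):
        event = self.events[action]
        nxt = self.transitions.get((self.state, event))
        if nxt is None:                      # event disabled in this state
            reward = -0.1
        else:
            self.state = nxt
            reward = 1.0 if nxt == self.goal else -0.01
        self.t += 1
        terminated = self.state == self.goal
        truncated = self.t >= self.horizon
        return self.states.index(self.state), reward, terminated, truncated, {}

# Tabular Q-learning over a wrapped toy three-state machine.
env = FSMEnv(states=['idle', 'busy', 'done'],
             events=['start', 'finish'],
             transitions={('idle', 'start'): 'busy', ('busy', 'finish'): 'done'},
             init='idle', goal='done')
Q = np.zeros((env.observation_space.n, env.action_space.n))
alpha, gamma, eps = 0.1, 0.95, 0.1
for episode in range(500):
    s, _ = env.reset()
    done = False
    while not done:
        a = env.action_space.sample() if np.random.rand() < eps else int(Q[s].argmax())
        s2, r, term, trunc, _ = env.step(a)
        Q[s, a] += alpha * (r + gamma * Q[s2].max() * (not term) - Q[s, a])
        s, done = s2, term or trunc

Because the FSM is exposed through the standard reset/step interface, the same wrapped model can be handed to Deep Q-learning or any other Gym-compatible RL method without changes to the environment itself.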