Monday, January 31, 2022

Articles on JaamSim

The last few years have seen an increase in the number of scholarly articles regarding JaamSim. One article in particular caught our eye. In its review of open source simulation software for production and logistics, Lang et al. (2021) concluded that:

"JaamSim provides everything which is necessary to model typical planning tasks in production and logistics and proves as a real alternative to commercial discrete-event simulation tools."

Of course, this observation was not news to us, but it was nice to see it recognized independently!

In an earlier article that reviewed a large number of open source simulation packages, Dagkakis and Heavey (2015) noted that:

"Out of all the OS DES projects we reviewed, JaamSim is the one with the most impressive 3d user interface that can compete against COTS [commercial off the shelf] DES software."

"The fact that a non-expert user can just download and test the software in a few minutes is something that is a scarce attribute in OS projects and especially in the DES domain."

"It is the only tool we found that is clearly industry driven."

These two articles, along with many others relevant to JaamSim, are listed below:

Lang, S.; Reggelin, T.; Müller, M.; Nahhas, A. (2021): Open-source discrete-event simulation software for applications in production and logistics: An alternative to commercial tools?, Procedia Computer Science, Vol. 180, 978-987, doi:10.1016/j.procs.2021.01.349

Possik, J.; Zouggar-Amrani, A.; Vallespir, B.; Zacharewicz, G. (2021): Lean techniques impact evaluation methodology based on a co-simulation framework for manufacturing systems, International Journal of Computer Integrated Manufacturing, doi:10.1080/0951192X.2021.1972468

Gorecki, S.; Possik, J.; Zacharewicz, G.; Ducq, Y.; Perry, N. (2021): Business models for distributed-simulation orchestration and risk management, Information, Vol. 12, No. 71, doi:10.3390/info12020071

Ochs, J.; Biermann, F.; Piotrowski, T.; Erkens, F.; Nießing, B.; Herbst, L.; König, N.; Schmitt, R.H. (2021): Fully automated cultivation of adipose-derived stem cells in the StemCellDiscovery—a robotic laboratory for small-scale, high-throughput cell production including deep learning-based confluence estimation, Processes, Vol. 9, 575, doi:10.3390/pr9040575

Amarantou, V.; Chatzoudes, D.; Angelidis, V.; Xanthopoulos, A.; Chatzoglou, P. (2021): Improving the operations of an emergency department (ED) using a combined approach of simulation and analytical hierarchical process (AHP), Journal of Simulation, doi:10.1080/17477778.2021.1981784

Xanthopoulos, A. S.; Koulouriotis, D. E. (2021): A comparative study of different pull control strategies in multi-product manufacturing systems using discrete event simulation, Advances in Production Engineering & Management, Vol. 16, No. 4, 473-484, doi:10.14743/apem2021.4.414

Kumar, A.; Sharma, K.; Singh, H.; Naugriya, S. G.; Gill, S. S.; Buyya, R. (2021): A drone-based networked system and methods for combating coronavirus disease (COVID-19) pandemic, Future Generation Computer Systems, Vol. 115, 1-19, doi:10.1016/j.future.2020.08.046

Kumar, A.; Krishnamurthi, R.; Nayyar, A.; Luhach, A. K.; Khan, M. S.; Singh, A. (2021): A novel software-defined drone network (SDDN)-based collision avoidance strategies for on-road traffic monitoring and management, Vehicular Communications, Vol. 28, doi:10.1016/j.vehcom.2020.100313

Izquierdo, F.; Garcia, E.; Cortez, B.; Escobar, L. (2021): Flexible manufacturing systems optimization with meta-heuristic algorithm using open source software, Recent Advances in Electrical Engineering, Electronics and Energy, Vol. 763, doi:10.1007/978-3-030-72212-8_18

Larsson, R. (2021), Development and application of a tool for assessing the impact of failure modes on performance of underground drill rigs, Dissertation, Örebro University, Sweden

Gorecki, S.; Possik, J.; Zacharewicz, G.; Ducq, Y.; Perry, N. (2020): A multicomponent distributed framework for smart production system modeling and simulation, Sustainability, Vol. 12, No. 17, 6969, doi:10.3390/su12176969

Kumar, A.; Srikanth, P.; Nayyar, A.; Sharma, G.; Pulipeti, S.; Krishnamurthi, R.; Alazab, M. (2020): A novel simulated-annealing based electric bus system design, simulation, and analysis for Dehradun smart city, IEEE Access, doi:10.1109/ACCESS.2020.2990190

Kumar, A.; Krishnamurthi, R.; Nayyar, A.; Sharma, K.; Grover, V.; Hossain, E. (2020): A novel smart healthcare design, simulation, and implementation using healthcare 4.0 processes, IEEE Access, Vol. 8, 118433-118471, doi:10.1109/ACCESS.2020.3004790

Kumar, A.; Sharma, D. K.; Nayyar, A.; Singh, S.; Yoon, B. (2020): Lightweight proof of game (LPoG): a proof of work (PoW)’s extended lightweight consensus algorithm for wearable kidneys, Sensors, Vol. 20, 2868, doi:10.3390/s20102868

Kim, S.; Chepenik, L. G. (2020): Use of computer modeling to streamline care in a psychiatric emergency room: a case report, Psychiatric Services, Vol. 71, 92–95, doi:10.1176/appi.ps.201900040

Kloock-Schreiber, D.; Siqueira, R.; Gembarski, P. C.; Lachmayer, R. (2020): Discrete-event simulation for specification design of products in product-service systems, Proceedings of the Design Society: DESIGN Conference, Vol. 1, 255-264, doi:10.1017/dsd.2020.295

Kiss, T.; DesLauriers, J.; Gesmier, G.; Terstyanszky, G.; Pierantoni, G.; Abu Oun, O.; Taylor, S. J. E.; Anagnostou, A.; Kovacs, J. (2019): A cloud-agnostic queuing system to support the implementation of deadline-based application execution policies, Future Generation Computer Systems, Vol. 101, 99-111, doi:10.1016/j.future.2019.05.062

Zeng, W.; Baafi, E.; Walker, D. (2019): A simulation model to study bunching effect of a truck-shovel system, International Journal of Mining, Reclamation and Environment, Vol. 33, No. 2, 102-117, doi:10.1080/17480930.2017.1348284

Katsios, D.; Xanthopoulos, A. S.; Koulouriotis, D. E.; Kiatipis, A. (2018): A simulation optimisation tool and its production/inventory control application, International Journal of Simulation Modelling, Vol. 17, No. 2, 257-270, doi:10.2507/IJSIMM17(2)425

Ghafghazi, S.; Lochhead, K.; Mathey, A.; Forsell, N.; Leduc, S.; Mabee, W.; Bull, G. (2017): Estimating mill residue surplus in Canada: a spatial forest fiber cascade modeling approach, Forest Products Journal, Vol. 67, 205, doi:10.13073/FPJ-D-16-00031

Li, X.; Li, Z.; Wu, G. (2017): Lean precast production system based on the CONWIP method, KSCE Journal of Civil Engineering, doi:10.1007/s12205-017-2009-4

Abas, Z. A.; Ee-Theng, L.; Rahman, A. F. N. A.; Abidin, Z. Z.; Shibghatullah, A. S. (2015): Enhanced scheduling traffic light model using discrete event simulation for improved signal timing analysis, ARPN Journal of Engineering and Applied Sciences, Vol. 10, No. 18

Dagkakis, G.; Heavey, C. (2015): A review of open source discrete event simulation software for operations research, Journal of Simulation, Vol. 10, No. 3, doi:10.1057/jos.2015.9

Sterling, T.; Kogler, D.; Anderson, M.; Brodowicz, M. (2014): SLOWER: A performance model for Exascale computing, Supercomputing Frontiers and Innovations, Vol. 1, No. 2, 42-57, doi:10.14529/jsfi140203

King, D. H.; Harrison, H. S.; Chudleigh, M. (2014): “JaamSim” described in three simple examples, Proceedings of the Operational Research Society Simulation Workshop 2014 (SW14)

King, D. H.; Harrison, H. S. (2013): JaamSim open-source simulation software, Proceedings of the 2013 Grand Challenges on Modeling and Simulation Conference, doi:10.5555/2557668.2557669

King, D. H.; Harrison, H. S. (2013): Open-source simulation software “JaamSim”, 2013 Winter Simulations Conference (WSC), 2163-2171, doi:10.1109/WSC.2013.6721593

King, D. H.; Harrison, H. S. (2010): Discrete-event simulation in Java: a practitioner's experience, Proceedings of the 2010 Conference on Grand Challenges in Modeling & Simulation (GCMS '10), Society for Modeling & Simulation International, 436-441, doi:10.5555/2020619.2020678

Saturday, January 29, 2022

Teaching Simulation using JaamSim

Over the years we have tried to keep track of all the universities and colleges that use JaamSim for their courses in discrete event simulation. Typically, we ask the instructor to provide details about the course in the form of a post to the JaamSim forum under the topic "Teaching Simulation using JaamSim" (link). Some instructors have generously shared their course notes to make it easier for others to use JaamSim in their courses.

Included in the post are the following institutions:
  • Rutgers University, USA
  • The Wharton School, University of Pennsylvania, USA
  • United States Naval Academy, USA
  • Karlstad University, Sweden
  • University of Zaragoza, Spain
  • Swiss Distance University of Applied Sciences, Switzerland
  • Maastricht University, The Netherlands
  • University of Antwerp, Belgium
  • Derby College, UK
  • Conestoga College, Canada
  • Capilano University, Canada
  • University of Auckland, New Zealand
  • Universidad del Valle de Guatemala, Guatemala
  • Universidad de San Carlos de Guatemala, Guatemala
  • Universidad EAFIT, Colombia
  • Universidade Tecnológica Federal do Paraná, Brazil
  • Universidade Federal de Ouro Preto, Brazil
  • Universidade Federal Rural do Semi-Árido, Brazil
  • Universidade Federal de São João del-Rei, Brazil
More information about these courses and the names of the instructors can be found by clicking on the link to the forum post given above or by searching for this topic in the JaamSim forum.

With 12,000 downloads of the software per year, we expect that JaamSim is being used at many more institutions than the ones listed above. If you would like to add your university or college to this list, please post the requested information about your course under this topic in the JaamSim forum.

Sunday, December 19, 2021

Optimum Number of Threads

The latest release of JaamSim (2021-06) has the ability to execute multiple simulation runs in parallel on a multi-core computer. The number of parallel runs is specified by the 'NumberOfThreads' input for the Simulation object. Although there is no upper limit on this input, specifying too many threads will result in excess context switching, which reduces execution speed. So, how does one choose the best value for this input, and how much faster will a set of simulation runs be completed?

The 'Run Progress' pop-up, which appears when the runs are started, provides a way to answer these questions.


In addition to showing the scenarios and replications that are being executed on each thread, the number in red at the bottom left corner shows the estimated rate at which simulation runs are being completed. This value is updated every 5 seconds based on the amount of progress made since the previous update. The optimum number of threads is the one that generates the greatest number of runs executed per hour.

The only way to be sure that the right number of threads has been chosen is to experiment with the NumberOfThreads input for the model in question. In most cases, however, the optimum number will be the same for all models and determined by the type of CPU installed in the computer. The following analysis was performed for two computers: a laptop computer and a desktop computer, both with Intel i7 processors that have four cores and hyper-threading. Potentially, these processors could provide up to eight threads (4 cores x 2 for hyper-threading).

The model used for the analysis was the 'Factory Example' model, which can be found by clicking Help > Examples > Factory Model. After setting the 'RunDuration' input to 30 years and turning 'Real Time' mode off, the model was executed for a series of runs with various inputs for NumberOfThreads. Before taking any measurements, the model was allowed to run for one minute to allow just-in-time compilation to be performed by the Java virtual machine. All runs were performed with no other applications open and with JaamSim's view and tool windows closed. The runs per hour value shown in the Run Progress pop-up was recorded when the first thread reached 50% complete. The process was repeated three times for each NumberOfThreads input and the average runs per hour was calculated.

The following graph shows the ratio of the runs per hour for a given NumberOfThreads input to the runs per hour for a single thread.


The graph shows that a NumberOfThreads input of 3 gave the best performance for the laptop computer and very nearly the best performance for the desktop computer. The poorer performance of the laptop computer with larger NumberOfThreads inputs is likely due to its limited cooling capacity. Note that the provision of hyper-threading on the Intel CPU did not allow a larger value to be used for NumberOfThreads.

The optimum run execution speed was about 2.2 - 2.5 times the rate for a single thread, which is about 73 - 83% of the potential processing power for three threads.

Although no tests were performed with CPUs that have more than four cores, it seems reasonable to guess that the following two formulae will hold approximately for other multi-core processors:

Optimum Number of Threads = (Number of Cores) - 1

Optimum Run Execution Speed = (0.8)(Optimum Number of Threads)
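
As a rough illustration of these rules of thumb, the following Java snippet estimates the number of threads to use and the expected speedup. Note that Java's Runtime.availableProcessors() reports logical processors, so it must be halved on a hyper-threaded CPU to obtain the number of physical cores. The class and method names are ours and are not part of JaamSim.

// A rough sketch that applies the two rules of thumb above. Not part of
// JaamSim; the class and method names are illustrative only.
public class ThreadEstimate {

    // availableProcessors() counts logical processors, so divide by two
    // when hyper-threading is enabled to obtain the physical core count.
    static int physicalCores(boolean hyperThreading) {
        int logical = Runtime.getRuntime().availableProcessors();
        return hyperThreading ? Math.max(1, logical / 2) : logical;
    }

    // Optimum Number of Threads = (Number of Cores) - 1
    static int optimumThreads(int cores) {
        return Math.max(1, cores - 1);
    }

    // Optimum Run Execution Speed = (0.8)(Optimum Number of Threads)
    static double estimatedSpeedup(int threads) {
        return 0.8 * threads;
    }

    public static void main(String[] args) {
        int cores = physicalCores(true);      // 4 on the i7 CPUs tested here
        int threads = optimumThreads(cores);  // 3
        System.out.printf("Threads: %d, estimated speedup: %.1fx%n",
                threads, estimatedSpeedup(threads));
    }
}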

In some ways, it is disappointing that only three threads can be used for a CPU that is rated at eight threads. Nevertheless, the ability to execute a set of simulation runs 2.2 - 2.5 times faster is still a very satisfactory outcome for this new feature. Even bigger benefits should be available with newer CPUs that can have as many as 64 cores.

Thursday, July 7, 2016

Citing JaamSim and Updated Release Policy

Citations

We've seen an increasing number of papers that use JaamSim in the research process. To support this, we've created some suggested text to help with citing JaamSim in your papers. Please cite JaamSim in the following way (adjust the version as needed):

JaamSim Development Team (2016). JaamSim: Discrete-Event Simulation Software. Version 2016-14. URL http://jaamsim.com. doi:10.5281/zenodo.57118

For LaTeX users, the following BibTeX entry may be useful:

@Manual{,
    title = {JaamSim: Discrete-Event Simulation Software},
    author = {{JaamSim Development Team}},
    year = {2016},
    note = {Version 2016-14},
    url = {http://jaamsim.com},
    doi = {10.5281/zenodo.57118}
}


As you may have noticed, we have also created a Digital Object Identifier for JaamSim to help people with their citations.

JaamSim Downloads

We've changed the download locations slightly to take advantage of the GitHub release feature. All versions will now have a permanent address hosting each of the pre-built JaamSim executables, rather than being hosted directly on jaamsim.com.

The latest release will always be linked directly from the existing downloads page, but if you ever need an older version to replicate past results, they are now all archived at:


That's all for now. As always, if you have any questions, please find us on the JaamSim forum.

Harvey Harrison



Saturday, April 23, 2016

JaamSim Software Inc.

We have formed a new company, JaamSim Software Inc., to develop, promote, and support JaamSim simulation software. The software will continue to be free and open-source, and we have no plans to introduce a "premium" version. Revenue will come from a range of services provided by the company:
  • Technical support contracts. Similar to commercial software vendors, we can provide on-call technical support with guaranteed response times to resolve technical problems.
  • Introductory courses. Courses will be offered in Vancouver, Canada for both newcomers to simulation and for experienced analysts wishing to convert to JaamSim from other software.
  • Onsite training and technology transfer. Customized training can be provided for groups of employees on their company's premises.
  • Paid development of new features. When a user needs a new feature to build a model, we do our best to provide it in a timely fashion. However, if the feature is too specialized to receive a high priority in our development cycle, it may not become available soon enough to meet the user's timeline. In this case, a user can pay us to program the feature at hourly rates. Paid features will become part of the free, open-source code in the same way as normal features.
  • Preparation of client models. When building a complex new model, it is often more cost-effective for a company to pay us to prepare and hand over an initial version of the model. It is much easier to modify a well-designed model than it is to build one from scratch.
  • Custom palettes of new objects. Complex, highly-detailed models can be built more easily using specialized objects designed for that application. For example, it would be better to build a model of a road network using specialized objects such as vehicles, roads, traffic lights, etc. than using more abstract objects such as SimEntity, Server, Queue, etc. We can prepare palettes of these specialized objects to suit any industry.
Along with the company, we have introduced a new logo for JaamSim, shown above. The logo represents the magical wine-bowl or "Jaam-e Jam" from Persian mythology, which I described in an earlier post. JaamSim's name refers to both "JAva Animation Modelling and SIMulation" and the magical wine bowl. Divinations from the bowl "... were said to reveal deep truths".

Contact Dr. Harry King at d.harry.king@gmail.com for more information about the services we can provide.

Sunday, November 30, 2014

The Fastest Simulation Software

One of my goals for 2014 was to perform the execution speed benchmarks described in my previous posts on a wide selection of mainstream simulation software packages. The software packages were compared by measuring the execution times for three basic activities:
  • executing a delay block
  • creating and destroying an entity
  • seizing and releasing a resource
Note that in previous posts, the third benchmark was the time to execute a "simple" block, which was taken to be one-half of the time to seize and release a resource. The new benchmark avoids this inference and presents the quantity that was actually measured.

Execution times were measured by counting the number of entities processed in 60 seconds, measured with a stopwatch. The posted results are the averages of three runs for each benchmark. All the runs were performed on the same laptop computer with a second-generation ("Sandy Bridge") Core i5 processor running at 2.5 GHz. To make the results less machine specific, execution times were converted to clock cycles.
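
As an aside, the conversion from measured throughput to clock cycles is straightforward, and the Java snippet below shows the arithmetic. The entity count used here is a hypothetical value chosen only to illustrate the calculation, not a measured benchmark result.

// Illustrative only: converts a measured throughput (entities processed in a
// timed interval) into clock cycles per entity. The entity count below is a
// hypothetical value, not a measured benchmark result.
public class ClockCycles {

    static double cyclesPerEntity(long entitiesProcessed, double seconds,
                                  double clockHz) {
        return clockHz * seconds / entitiesProcessed;
    }

    public static void main(String[] args) {
        double clockHz = 2.5e9;          // the 2.5 GHz laptop used for the tests
        long processed = 625_000_000L;   // hypothetical count after 60 seconds
        System.out.printf("%.0f clock cycles per entity%n",
                cyclesPerEntity(processed, 60.0, clockHz));   // prints 240
    }
}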

The following bar graph presents the results of the comparison.


Previous versions of this graph were labelled as "preliminary" to provide a chance for the vendors or other interested parties to improve the benchmark models for the individual software packages. Now that I have corresponded with most of the vendors and incorporated their suggestions, these results can be considered to be final for the specified version of each software package.

Revision 1: New results for Arena. Selecting the "Run > Run Control > Batch Run (No Animation)" option speeds up the benchmarks by a factor of ten. My thanks to Jon Santavy and Alexandre Ouellet for pointing this out.

Revision 2: New results for SIMUL8. Setting travel times between objects to zero resulted in a large improvement for the first and third benchmarks. The default setting for SIMUL8 is to assign a travel time in proportion to the distance between work centres, which increased the computational effort for these two benchmarks. The setting can be changed to zero travel time by selecting File > Preferences > Distance. My thanks to Sander Vermeulen for auditing and correcting the benchmark models.

Revision 3: New results for FlexSim. Execution speed was increased by approximately 10% by closing the “Model View (3D)” window during the run. The seize/release resource benchmark was also added to the results. After gaining more experience with FlexSim, it became clear that the Processor and Operator objects used for the first and third benchmarks are more complex objects than the simple seize, release, resource, and delay blocks that these two benchmarks are intended to evaluate. Since each object performs a series of actions to process an incoming entity, rather than just a single action, the results for the two benchmarks cannot be compared on a like-to-like basis with the other software packages. My thanks to Bill Nordgren for his help with the benchmarks.

Revision 4: New results for ExtendSim. Execution speeds were increased significantly by replacing the "Activity" blocks in Benchmarks 1 and 3 with "Workstation" blocks. The Workstation block is faster because it supports less functionality (pre-emption, shut-downs, and state statistics) than the Activity block, decreasing its overhead. It may be possible for ExtendSim users to increase execution speed further by creating a customized Activity block with any unneeded functionality stripped from the ModL code. My thanks to Peter Tag for his guidance on ExtendSim.

Revision 5: New results for Simio. The entity creation and destruction benchmark was revised to use tokens instead of entities. All three benchmarks are now token-based. Tokens were used for the benchmark because they provide the same capabilities as the basic entity provided by some of the other simulation packages. The corresponding times for Simio's entity-based benchmarks are many times longer than the token-based ones. My thanks to David Sturrock for preparing the new benchmark models.

Revision 6: New results for Arena. All three benchmark models were revised to use elements instead of modules to avoid unnecessary overhead. It is common practice for Arena modellers to avoid modules in large models where execution speed is an important factor. The module-based benchmark times are about 50% longer than the element-based times. My thanks to Alexandre Ouellet for preparing the new benchmark models.

Revision 7: Results for Simio 7.119. The latest release of Simio shows a significant improvement in execution speed for seizing and releasing a resource (using tokens). Processing time was reduced from 21,000 clock cycles for release 7.114 to 4,300 clock cycles for release 7.119.

I should caution readers not to put very much importance on differences in execution speeds of less than a factor of two. Ease of use and suitability for the system to be modelled will always be the primary criteria for selecting simulation software. Execution speed is only important if it is a significant time saver for the analyst (impacting ease of use) or if it is required for the model to be tractable (impacting suitability for the system to be modelled). For some systems, even very slow software will be quite fast enough.

To put the execution times in perspective, consider the number of times an activity can be performed in one minute of processing time on a 2.5 GHz laptop computer. An activity requiring 150 clock cycles can be executed one billion times in one minute. One requiring 1,500 clock cycles can be executed 100 million times in one minute. Even one requiring 150,000 clock cycles can be executed one million times in one minute. In most cases, the fastest benchmark times measured by this exercise will be important only for very large models that require hundreds of millions of blocks to be executed over the course of each simulation run.

SLX holds the crown as the fastest of the simulation packages, with times of 60, 110, and 230 clock cycles; however, it differs from the others in that it is strictly a programming language and lacks any drag & drop capabilities. JaamSim is the fastest of the simulation packages that include a drag & drop interface, with times of 240, 530, and 2,400 clock cycles. Arena, SIMUL8, AnyLogic, and Simio turn in solid results in the 1,000 - 10,000 clock cycle range. ExtendSim is significantly slower, with one benchmark time that exceeds 20,000 clock cycles. The results for FlexSim are not directly comparable to the other software packages and are provided for reference only (see the notes for Revision 3).

Sunday, September 7, 2014

Another Big Performance Increase for Release 2014-36

Faster Event Processing

Harvey Harrison has done it again -- event scheduling and processing is about 50% faster in this week's release (2014-36). On my laptop, the benchmark runs at nearly 8 million events/second compared to 5 million last week. It wasn't many months ago when I was excited about 2 million events/second. The following graph shows the full results for the benchmark.


SLX Software

A new entry on the above graph is the result for SLX from Wolverine Software. The developer, Jim Henriksen, had sent me this result several months ago along with the input file. The results for the other two benchmarks were described in a post Jim made to the LinkedIn group for the Society for Modeling & Simulation International a few weeks ago. You can read his post here.

Ideally, I should re-run the SLX benchmarks myself, but I have not yet found the time to explore SLX enough to do so. In the interest of keeping things moving, the results Jim provided are shown in all the latest graphs.

It is fair to say that SLX sets the gold standard for execution speed. This is one of its key features, and SLX achieves this status by being the only simulation software to provide a compiler for its proprietary programming language. Somewhat more effort is required to build an SLX model -- there is no drag and drop model building environment -- but the pay-off is the shortest possible execution time.

Concluding Remarks

With Harvey's latest improvements, JaamSim processes events about 6 times faster than Simio, but is still about 2.7 times slower than SLX. JaamSim's event processing code is unlikely to get any faster -- all reasonable optimizations have been made already -- so we will have to accept that SLX is faster in this area. To be completely honest, we were pleasantly surprised to have got this close to SLX.
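
For readers who want to see what "processing an event" involves, the sketch below shows a generic schedule-and-execute loop over a future event list ordered by time. It is only a rough illustration of the work such a benchmark measures; it is not JaamSim's event manager, and the class and field names are ours.

import java.util.PriorityQueue;

// A generic illustration of the schedule-and-execute loop that an
// event-processing benchmark exercises: events are added to a future event
// list ordered by time and executed in time order. This is not JaamSim's
// event manager.
public class EventLoopBenchmark {

    static class Event {
        final long time;
        final Runnable action;
        Event(long time, Runnable action) { this.time = time; this.action = action; }
    }

    public static void main(String[] args) {
        PriorityQueue<Event> futureEvents =
                new PriorityQueue<>((a, b) -> Long.compare(a.time, b.time));
        final long[] executed = {0};

        // Seed the future event list with a single self-rescheduling event so
        // that each loop iteration schedules and executes exactly one event.
        futureEvents.add(new Event(1, () -> executed[0]++));

        long start = System.nanoTime();
        while (executed[0] < 10_000_000L) {
            Event next = futureEvents.poll();                        // earliest event
            next.action.run();                                       // execute it
            futureEvents.add(new Event(next.time + 1, next.action)); // reschedule
        }
        double seconds = (System.nanoTime() - start) / 1e9;
        System.out.printf("%.1f million events/second%n",
                executed[0] / seconds / 1e6);
    }
}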

To put the event processing speed differences in perspective, consider a model that requires one billion events to be executed. On my 2.5 GHz laptop, the event processing portion of the simulation run would total 13 minutes in Simio, 2.1 minutes in JaamSim, and 0.8 minutes in SLX. For this number of events, the difference between SLX and JaamSim amounts to only 1.3 minutes out of what is likely to be a much longer total duration for the run. Nearly 10 billion events would be required for the difference between SLX and JaamSim to reach 10 minutes. Beyond 10 billion events, SLX is likely to have a significant advantage over JaamSim. Below 1 billion events, the difference is likely to be insignificant.

Now that we have good performance for event processing, our next objective is to bring JaamSim's event creation/destruction time into line with the other benchmarks. SLX achieves its excellent result for this benchmark by providing an internal entity pool, so that used entities can be recycled instead of being destroyed. Up to now, we have shied away from an entity pool for JaamSim, preferring to focus our attention on making entity creation more efficient. Moving forward, we may have to reconsider various ways to implement an entity pool.