# Introduction to the customized module

* The customized module defines a class (called "D3plotReaderPro") that encapsulates LS-Reader.
  The class is designed for specialized post-processing: in brief, it uses the raw data from
  LS-Reader to complete complex tasks. For now, the class supports nodal average results for
  solid elements:
    signed von Mises stress
    signed von Mises strain
    p1 stress
    p2 stress
    p3 stress
    p1 strain
    p2 strain
    p3 strain
  Each of these methods returns a dictionary mapping node user ID to the nodal average result.
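  To illustrate what "nodal average" means here, the following is a minimal sketch (not the
  module's actual implementation): each element's value is attributed to every node of that
  element, then averaged per node. The connectivity and values below are made-up illustration
  data, and `nodal_average` is a hypothetical helper, not part of the module's API.

  ```python
  from collections import defaultdict

  def nodal_average(element_nodes, element_values):
      """Map each node user ID to the mean value of its attached elements."""
      total = defaultdict(float)
      count = defaultdict(int)
      for nodes, value in zip(element_nodes, element_values):
          for nid in nodes:
              total[nid] += value
              count[nid] += 1
      return {nid: total[nid] / count[nid] for nid in total}

  # Two (degenerate) "elements" sharing node 2:
  conn = [(1, 2), (2, 3)]
  vals = [10.0, 20.0]
  print(nodal_average(conn, vals))  # {1: 10.0, 2: 15.0, 3: 20.0}
  ```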

* Tree 
    │  lsreaderPro.py (class definitions)
    │  README.txt  (this file)
    │
    └─test (test folder)
            test.py (show how to use the module)

* How to use?
  from lsreaderPro import D3plotReaderPro
  from lsreader import D3P_Paramter as dp
  
  dr = D3plotReaderPro("your/d3plot")
  p = dp()
  # set the parameters
  p.ist = 3
  p.ask_for_numpy_array = True
  node_nodalAverageSignedVonMisesStress = dr.solid_nodal_average_signed_von_mises_stress(p)
  ...
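  Since the result is an ordinary dictionary (node user ID to value), it can be processed with
  plain dict operations. A hedged sketch, using a stand-in dictionary instead of a real result
  from D3plotReaderPro:

  ```python
  # Stand-in for a returned result: node user ID -> signed stress value.
  result = {101: 250.0, 102: -310.0, 103: 180.0}

  # Find the node with the largest stress magnitude.
  worst = max(result, key=lambda nid: abs(result[nid]))
  print(worst, result[worst])  # 102 -310.0
  ```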

* Notes
  Use LS-Reader 0.1.41 or higher.

* Why should we encapsulate LS-Reader instead of adding complex APIs to LS-Reader directly?
  LS-Reader can be considered a translator of LS-DYNA result files (d3plot, d3thdt, binout, ...).
  Its main purpose is to return the raw data as arrays (float arrays, string arrays, ...). Users of
  LS-Reader have to manage the memory themselves, and can use APIs such as D3P_NUM_SOLID to get the
  required sizes. Managing memory for raw data (such as element results) is easy. But if LS-Reader
  supported complex APIs such as D3P_SOLID_NODAL_AVERAGE_*, users would first have to calculate the
  number of nodes belonging to solid elements (a number that is not raw data), then allocate the
  memory, and then call the corresponding APIs, looping over the raw data two or more times.
  Using the raw-data APIs of LS-Reader directly is more efficient and more convenient: users can
  organize the raw data according to their own requirements.
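  As an illustration of deriving results from raw component arrays yourself, here is a sketch of
  the standard von Mises equivalent stress computed from six stress-tensor components. The
  component names and ordering are assumptions for this example, not LS-Reader's actual output
  layout, and the sign convention for the module's "signed" variant is not specified here.

  ```python
  import math

  def von_mises(sxx, syy, szz, sxy, syz, szx):
      """Standard von Mises equivalent stress from stress-tensor components."""
      return math.sqrt(
          0.5 * ((sxx - syy) ** 2 + (syy - szz) ** 2 + (szz - sxx) ** 2)
          + 3.0 * (sxy ** 2 + syz ** 2 + szx ** 2)
      )

  # Uniaxial stress state: the von Mises stress equals the axial stress.
  print(von_mises(100.0, 0.0, 0.0, 0.0, 0.0, 0.0))  # 100.0
  ```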
