Collecting Usage Statistics of Dynamo - Revit Scripts Into A MySQL Database (With A Guide)
In my previous article, I tried to convey some thoughts on the value of Dynamo scripts for
consultancies, as well as on benchmarking those scripts against metrics. What I did not elaborate
on as much was how the data for these metrics can be collected in the first place. That is what
this article is about: part discussion, part tutorial in the second half.
First, a reminder of why we need to collect data at all. When trying to measure the impact that
the developed Dynamo scripts, or the workflow in general, have on the office and on other
engineers and architects, we cannot rely on everyone constantly keeping track of their Dynamo
usage, or on personal feedback alone. Requiring users to log their experience with Dynamo
scripts every single time would become a bottleneck in the design process, which is exactly what
we are trying to overcome by implementing computational BIM design through software like
Dynamo and Grasshopper. Exact metrics are also necessary to overcome potential biases in
personal feedback. A user may prefer certain scripts that increase his own efficiency, but
perhaps those scripts are not used much by others and do not represent the direction of the
workflow being developed to maximize overall company efficiency. In addition, reliably
estimating the value generated by a given Dynamo script revolves around measuring the time
saved compared to the manual approach, and for that an exact count of the times the script was
run is crucial. More on this subject, and on how to benchmark that value, can be found in my
previous LinkedIn article.
Returning to the topic, as I have slightly drifted away: the best solution to the above issues is an
automated way to collect usage data, and there are multiple ways such collection can be
integrated into Dynamo scripts. I will focus on two approaches here:
Collecting data into text logs on individual machines;
Collecting data into MySQL databases.
I will explore both options and present ways to implement them, with mini-tutorials for
Dynamo and Revit.
Starting with the first approach, the text log: in total, there are only 8 nodes, including 1 Python
node. 1) The first step is to specify the location of the text log file. It is best kept in a centralized
location, such as network storage, but it could also be stored individually on each user's
machine; either way, the file path must be provided carefully. The script does not create the text
file itself, so an empty file should be created before its path is specified as a string in Dynamo.
2) I have included manually specified script identification and script version parameters, which
are passed to 3) the Python script that collects the other information (the code is shown further
down the article). 4) The retrieved data are converted to strings and then joined together into a
single line in 6). To separate the values later on, a delimiter is specified in 5); it can be changed
to whatever you want. 7) As the whole data set is now formatted into a single string, we need a
simple way to start each run on a new line rather than appending to the end of the previous one,
so we add a newline character, \n, to the end of the string. Finally, at 8) we tell Dynamo to write
that string to the file we specified at the start.
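To make the node chain concrete, here is a minimal pure-Python sketch of steps 4) through 8):
the values list stands in for the Python node's output, and the delimiter and file path are
hypothetical placeholders.

values = ["MyScript", "1.0", "jsmith", "2020-05-04 14:32:10"]  # hypothetical run data

delimiter = ";"                                    # 5) the value separator
line = delimiter.join(str(v) for v in values)      # 4) + 6) stringify and join
line += "\n"                                       # 7) newline so each run lands on its own row

log_path = r"\\server\share\dynamo_usage_log.txt"  # hypothetical pre-created log file
with open(log_path, "a") as log:                   # 8) append to the existing log
    log.write(line)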
The Python script does most of the information gathering. One reason to use it is to reduce the
number of Dynamo nodes within the script; the other is that some data cannot be obtained with
default Dynamo nodes without relying on additional Dynamo packages. The code is presented
below, with comments after the # symbol for those not familiar with the syntax. I am sure
someone will ask for the code in text form so they can just copy it. Consider typing it out to be
your homework if you intend to implement this; I will not provide the option to copy-paste it.
One additional comment: the script version and script id that are connected to this Python node
appear only in the output variable, as the data is just passed through. You might ask why not
include them directly in the code. That would mean editing the Python code for every script;
keeping some variables outside can be simpler. One more remark: if the collection will be very
widespread, across dozens or more scripts, it is best to maintain the Python code inside custom
nodes. That way, future changes can be somewhat centralized, as only the custom node needs
to be changed, not each script individually.
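For illustration, here is a minimal sketch of what such an information-gathering Python node
might look like, assuming Dynamo's IronPython environment inside Revit; the exact set of values
collected is up to you, and everything below is illustrative rather than the exact code from the
screenshots.

import clr
from datetime import datetime
from System import Environment

clr.AddReference("RevitServices")
from RevitServices.Persistence import DocumentManager

script_id = IN[0]       # wired in from a string node, not hard-coded
script_version = IN[1]  # kept outside the code so it is easy to edit per script

doc = DocumentManager.Instance.CurrentDBDocument  # the open Revit document

user = Environment.UserName                       # Windows username
machine = Environment.MachineName                 # workstation name
run_time = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
model_name = doc.Title                            # Revit model file name

# One flat list out; its order defines the column order in the log.
OUT = [script_id, script_version, user, machine, run_time, model_name]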
Finally, the outcome after 2 quick test runs is the following text log (I have pixelated some semi-
sensitive information). As you can see, the data is presented in a relatively clean fashion. The
order of the data can be changed in line 42 of the Python script; it is really up to preference.
Moving on to the MySQL approach: the inside of the Python script can look intimidating, but it
really isn't, and I have commented all the key lines of code. The principles are simple. You first
have to link the folder that contains the MySQL.Data.dll library. Then it comes down to defining
the MySQL queries, which I won't cover in detail, as I assume you have some knowledge of the
statement syntax. The idea is to provide those queries as strings or multiline strings, hence the
triple " symbols. We then open the connection to the database with the specified connection
data and execute the queries against it. The last bit of code retrieves the primary key, a unique
identifier of the data row inserted into the table; I have defined an additional condition to
retrieve the primary key of the row whose time matches the one we inserted.
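As a rough sketch of that sequence, assuming the MySQL Connector/NET library and entirely
hypothetical paths, credentials, and table/column names:

import clr
clr.AddReferenceToFileAndPath(r"C:\DynamoLibs\MySQL.Data.dll")  # folder containing the library
from MySql.Data.MySqlClient import MySqlConnection, MySqlCommand

script_id, script_version, user, run_time = IN[0], IN[1], IN[2], IN[3]

conn_str = "Server=myserver;Database=dynamo_stats;Uid=dynamo;Pwd=secret;"

# Insert the run with status 0; the closing node flips it to 1 on success.
insert_query = """
INSERT INTO script_runs (script_id, script_version, user_name, run_time, status)
VALUES ('{0}', '{1}', '{2}', '{3}', 0);
""".format(script_id, script_version, user, run_time)

# Retrieve the primary key of the row we just inserted, matched on the timestamp.
select_query = """
SELECT id FROM script_runs
WHERE run_time = '{0}' AND user_name = '{1}';
""".format(run_time, user)

conn = MySqlConnection(conn_str)
conn.Open()
try:
    MySqlCommand(insert_query, conn).ExecuteNonQuery()
    primary_key = MySqlCommand(select_query, conn).ExecuteScalar()
finally:
    conn.Close()

OUT = primary_key  # handed to the closing Python node for the status update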
The last Python code block handles updating the table entry after the script completes or fails to
complete. It takes the final value from the last Dynamo node of the main, functional part of the
script and checks its output. In most cases, a failed run will output "null". In other cases, when,
say, some node is not properly connected and cannot function, it will output "Function" when
previewed in Dynamo, although internally it is expressed as "ProtoCore.DSASM.StackValue".
Therefore, I use a simple conditional check: if the output value is "null" or
"ProtoCore.DSASM.StackValue", the script does not execute the MySQL part and only outputs a
simple status from the node itself. If the output is anything but those values, it executes the
MySQL query to update the table.
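A minimal sketch of that closing node, reusing the same hypothetical connection details and
table as the insert sketch above:

import clr
clr.AddReferenceToFileAndPath(r"C:\DynamoLibs\MySQL.Data.dll")
from MySql.Data.MySqlClient import MySqlConnection, MySqlCommand

final_output = IN[0]  # output of the last functional node in the script
primary_key = IN[1]   # row id returned by the insert node

result_text = str(final_output)

# Failed runs surface as "null" or an unevaluated "ProtoCore.DSASM.StackValue".
if result_text == "null" or "ProtoCore.DSASM.StackValue" in result_text:
    OUT = "Run failed - status left at 0"
else:
    update_query = "UPDATE script_runs SET status = 1 WHERE id = {0};".format(primary_key)
    conn = MySqlConnection("Server=myserver;Database=dynamo_stats;Uid=dynamo;Pwd=secret;")
    conn.Open()
    try:
        MySqlCommand(update_query, conn).ExecuteNonQuery()
    finally:
        conn.Close()
    OUT = "Run completed - status set to 1"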
The stats get updated regardless of whether the Dynamo script is run through Dynamo itself or
through Dynamo Player. Normally this will not delay the script execution by much, maybe a
second or two, depending of course on the amount of data you are collecting and on the speed
of the server where the database resides.
The final outcome, when viewed in MySQL Workbench, would look something like the screenshot
below. I quickly ran the script three times, deliberately failing the script the first time, so you can
see the 0 highlighted in yellow. That column represents the status of the script: if it completed
properly, it will show 1, otherwise 0. This can be useful for troubleshooting, especially when more
information, such as inputs, is collected as well.
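For reference, a table along these lines would support the columns described above. This is a
hypothetical schema, written as the same kind of multiline query string the Python nodes use; it
only needs to be executed once, for example in MySQL Workbench.

# Hypothetical schema matching the sketches above: one row per run,
# status stays 0 until the closing node confirms completion.
create_table = """
CREATE TABLE IF NOT EXISTS script_runs (
    id             INT AUTO_INCREMENT PRIMARY KEY,
    script_id      VARCHAR(64),
    script_version VARCHAR(16),
    user_name      VARCHAR(64),
    run_time       DATETIME,
    status         TINYINT DEFAULT 0   -- 0 = failed/incomplete, 1 = completed
);
"""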
Final Thoughts
One should keep in mind that it is impossible to include every detail in the collected data, and
that regular feedback sessions should still be held. A user can provide insights and information
you might not have thought of collecting automatically.
If your company is looking seriously at implementing visual-programming-based workflows, or
any computational BIM design solutions, data collection and statistics for estimating the actual
value created will become unavoidable at some point. Implementing usage tracking inside
Dynamo / Grasshopper scripts can show which scripts pay off for the consultancy and which did
not perform as expected. Such knowledge can help guide the process over time. In addition,
integrating data collection inside every Dynamo script can enable more exotic solutions in the
future, such as applications of Machine Learning, provided sensible data is collected.
Opportunities are plentiful when you have enough resources, and in this digital age the key
resource is data. Collect it wisely, use it intelligently, guard it firmly.
Regimantas Ramanauskas
BIM Manager / Dynamo and Revi…
11 comments
German Rojas
BIM Manager at Constructora Colpatria | Leading the Digital Transformation of Construction since…
Fantastic guide Regimantas Ramanauskas, thanks! If you don't mind, I have a couple of simple
questions... If the database is hosted in the Cloud, would the same workflow work (changing the
connection settings, of course)? And could the MySQL dll be hosted in the Cloud too?
Adam Lamping
Global Lead Digital Asset Lifecycle at Arcadis
Partha Sarkar Andrew Victory
Vytautas Tamulėnas
BIM Manager | Revit & Dynamo Specialist
Superb! Such a clear way of justifying the investment in scripts!