# Simulation over a simulation result (sum of a random sample, where the number of elements in the sum is itself simulated)

Hello, I have a modelling question:

For a business-plan-like problem, I need to generate:

- a random number of projects (let's say a Binomial)

- a random number for the value of **each** project (let's say a Normal)

There is also a project-type index that I use for the parameters of the simulation (for example, the (n, p) parameters of the Binomial that models the number of won contracts, and the (mu, sigma) parameters of the value of contracts, are indexed by project type).

My question is: how do I use a simulation result (the number of projects, per project type) as an input to another simulation (that is, a sum of Normals, whose parameters mu and sigma are also indexed by project type)?

Specifically, regarding these two topics:

- knowing that functions that create lists cannot accept array-valued parameters, how do I create a function that builds a sum of variable size?

- my (very basic) understanding of MC simulation in Analytica is that there is a "Run" index behind the scenes; how do I use a simulation result that depends on another simulation result? Do I have to generate Normal samples, and generate a random subindex to simulate my sum of Normal samples?


Many thanks in advance!

To highlight the gist of your question, this WOULD NOT work:

```
Chance Num_of_projects ::= Binomial( 50, 0.6 )
Index Project ::= 1..Num_of_projects
```

The problem here is that the index would size to the Mid-value (median) of Num_of_projects, 1..30 in this case, but you want the number of projects to vary by Run. It wouldn't be long enough to handle half the projects.

What you should do in this case is set the length of the Project index to the maximum length,

```
Index Project ::= 1..50
```

Or, instead of hard-coding 50, you could use Max( Sample(Num_of_projects), Run ). Then define your value as

```
Chance Value_of_project ::= If @Project <= Num_of_projects Then Normal( 0, 1, over: Project ) Else Null
```

Once you have the Null in there, Analytica will handle Null cells for you pretty much seamlessly in all downstream variables, so you don't have to repeat the check for @Project<=Num_of_projects in downstream variable definitions.
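If it helps to see the mechanics outside Analytica, here is a minimal Python sketch of the same pattern (the distributions are taken from the example above; `None` stands in for Analytica's Null, and all names are illustrative):

```python
import random

random.seed(0)

MAX_PROJECTS = 50   # fixed length of the Project index (the Binomial's n)
N_RUNS = 1000       # stands in for the Run index

def one_run():
    # Number of projects this run: Binomial(50, 0.6), as in the post.
    n = sum(random.random() < 0.6 for _ in range(MAX_PROJECTS))
    # Value_of_project: Normal(0, 1) up to n, None (i.e., Null) afterwards.
    return [random.gauss(0, 1) if i < n else None for i in range(MAX_PROJECTS)]

# Downstream computations simply skip the None cells, mirroring how
# Analytica propagates Null through downstream variables.
totals = [sum(v for v in run if v is not None)
          for run in (one_run() for _ in range(N_RUNS))]
```

The key point is that the Project axis has a fixed maximum length while the effective length varies per run.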

Regarding your project-type index, I imagine you have something like this:

```
Index Project_type ::= 'A'..'H'
Chance Type_of_project ::= If @Project <= Num_of_projects Then ChanceDist( 1, Project_type, over: Project ) Else Null
Variable Winned_contracts_by_type ::= Table(Project_type)
Variable Mu_by_type ::= Table(Project_type)
Variable Sigma_by_type ::= Table(Project_type)
Variable Winned_contracts ::= Winned_contracts_by_type[ Project_type = Type_of_project ]
```

With that, the previous definition would be

```
Chance Value_of_project ::=
    Local mu := Mu_by_type[ Project_type = Type_of_project ];
    Local σ := Sigma_by_type[ Project_type = Type_of_project ];
    Normal( mu, σ )
```

Since mu and `σ` are indexed by Project, and are Null when @Project is past Num_of_projects, you can drop the If and the «over» from the previous definition -- it is now redundant.

> knowing that functions that create lists cannot accept array-valued parameters, how do I create a function that builds a sum of variable size?

Most functions that create lists have an option to include a result index (usually an optional parameter named «resultIndex»). If the result index you provide is longer than necessary, the result is null-padded. Hence, you could do something like:

```
Subset( cond, resultIndex: Project )
```

This is one way of handling it. Another way is to call the list-generating function in a scope where its parameters are guaranteed to be scalar, which lets the call complete successfully. Dimensionality declarations are typically used to guarantee the dimensionality within a lexical scope. For example:

```
Local n[ ] := Z Do Product( 1..n )
```

In this example, 1..Z would complain if Z were array-valued, but within the context of the Do, n is guaranteed to have no indexes -- i.e., guaranteed to be a scalar -- because no indexes are listed in the brackets, which are n's dimensionality declaration. Analytica structures the evaluation order so as to guarantee that n will be a scalar. The example here is equivalent to Factorial(Z), of course, but it illustrates the pattern.
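The same scalar-scope idea can be sketched in Python: `range` needs a scalar bound, so when Z is array-valued we apply the list-generating expression once per scalar element (names here are illustrative):

```python
import math

def product_1_to_n(z_values):
    # Apply the list-generating expression once per scalar element of Z,
    # a Python analogue of «Local n[] := Z Do Product(1..n)».
    return [math.prod(range(1, z + 1)) for z in z_values]

assert product_1_to_n([3, 5]) == [6, 120]   # i.e., Factorial(Z)
```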

When you use this trick, make sure that the result of the expression inside the Do is not a list, because then you have a second problem -- an array can have only one list dimension. Instead, either re-index the list dimension onto a real index, or put the value inside a reference (which would be like an array of lists). The reference example looks like this:

```
Local n[] := Z Do \(1..n)
```

I would classify references as an advanced concept, so I recommend using real indexes whenever possible. For example, this would re-index only the Project index, null-padding when n is shorter than Project.

```
Local n[] := Z Do Array( Project, 1..n )
```

> how do I use a simulation result that depends on another simulation result? Do I have to generate Normal samples, and generate a random subindex to simulate my sum of Normal samples?

You generally don't need to do anything special when one variable depends on another. Let Analytica worry about that for you. If you are collecting samples to compute something, there is a good chance you are thinking about it wrong. Think of each variable as possibly having uncertainty, but try not to focus on the algorithmic aspects of how the uncertainty is propagated -- focus just on how one variable depends on the previous one. Analytica can handle things like Normal( μ, σ ) where μ and σ are themselves uncertain.

There are some instances where you would do meta-level MC (i.e., an MC inside an MC). I think that 99% of the time I see this, the people doing it are confused and should not be. Keep your MC to one level. You don't need multiple levels to differentiate between different "kinds of uncertainty". As known since the 1950s, all kinds of uncertainty obey the same rules of probability and almost always collapse into a single level of uncertainty.
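As a quick illustration that one MC level suffices, here is a Python sketch (parameter values invented for illustration) where the mean itself is uncertain; drawing the parameter and then the value within the same run reproduces the compound distribution with no inner sampling loop:

```python
import random, statistics

random.seed(2)
N = 200_000

# One level of MC: each run draws the uncertain mean first, then the
# value given that mean.
samples = []
for _ in range(N):
    mu = random.gauss(10, 2)             # uncertain parameter
    samples.append(random.gauss(mu, 3))  # Normal(mu, 3) given that draw

# The result matches the single compound distribution:
# Normal(10, sqrt(2**2 + 3**2)) ≈ Normal(10, 3.61).
```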

You may also find these of use:

- The "EVSI for further treatment trials.ana" model in the Example Models / Decision Analysis folder

- Writing Array-Abstractable Definitions
