1. The more dimensions in your parameter space, the longer it takes to find an optimum in general. Grid search is exponential in the number of dimensions, while in practice our methods scale closer to linearly (something like 2D to 10D samples, where D is the number of dimensions). While there is no explicit "limit" on the number of parameters (in the code or the algorithm), we tend to see the best results when there are < 20, as Peter Frazier suggests, since that lets us find an optimum much more quickly (in the number of samples).
2. You provide a single metric (an Overall Evaluation Criterion) to SigOpt; it can be a combination of many sub-objectives. We talk a little about this in the MOE docs [1] and are happy to help people brainstorm the best objectives. This is a very important part of the process, as Microsoft points out in a recent paper (section 3.1) [2].
3. We can give you back experimental designs with n > 1 points, and we condition on any outstanding experiments that are still running. Some of this work was developed in my thesis [3], and we have some docs on it as well [4].
4. We do support the notion of "untunable" parameters in beta (releasing soon). If you know a given experiment must use a specific chemical, we can hold that constant while optimizing the other free parameters, so you can run a single batch under that specific constraint (or a set of constraints). I think this covers what you're asking, but I'm happy to dive deeper if not, and to help you set up one of these experiments if you're curious.
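To make the scaling claim in point 1 concrete, here is a small sketch contrasting the evaluation count of a full-factorial grid with a roughly linear sample budget. The 10-points-per-dimension grid and the 10D budget are illustrative assumptions on my part, not exact figures from our method:

```python
# Hedged sketch: grid search cost vs. a roughly linear sample budget.
# Both "points_per_dim=10" and the "10 * d" budget are assumed values
# chosen for illustration only.

def grid_search_evals(d, points_per_dim=10):
    """Full factorial grid: exponential in the number of dimensions d."""
    return points_per_dim ** d

def linear_sample_budget(d, samples_per_dim=10):
    """A budget that grows linearly with d (the 2D-10D regime above)."""
    return samples_per_dim * d

for d in (2, 5, 10, 20):
    print(f"d={d}: grid={grid_search_evals(d)}, linear={linear_sample_budget(d)}")
```

Even at d = 10 the grid needs ten billion evaluations while a linear budget stays at a hundred, which is why the sub-20-dimension regime is so much friendlier.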
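For point 2, a minimal sketch of what folding sub-objectives into a single Overall Evaluation Criterion can look like. The metrics, weights, and signs here are all invented for illustration; choosing them well is the domain-specific part we like to help brainstorm:

```python
# Hedged sketch: combining several sub-objectives into one scalar OEC.
# The metric names and weights below are hypothetical examples.

def overall_evaluation_criterion(accuracy, latency_ms, cost_dollars,
                                 w_acc=1.0, w_lat=0.001, w_cost=0.1):
    # Reward accuracy, penalize latency and cost. The optimizer only
    # ever sees this single scalar, never the individual sub-objectives.
    return w_acc * accuracy - w_lat * latency_ms - w_cost * cost_dollars

# One evaluation of a candidate configuration:
score = overall_evaluation_criterion(accuracy=0.92,
                                     latency_ms=120.0,
                                     cost_dollars=0.5)
```

The key design choice is that trade-offs between sub-objectives are made explicit in the weights up front, rather than implicitly during hand-tuning.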
[1]: http://yelp.github.io/MOE/objective_functions.html
[2]: http://www.exp-platform.com/documents/puzzlingoutcomesincont...
[3]: https://github.com/sc932/Thesis/blob/master/ScottClark_thesi...
[4]: https://sigopt.com/docs/overview/parallel