Source code of PSO variants

23/11/2011 18:25

Below are some drafts and source codes. See also the Particle Swarm Central (useful links about people, papers etc.)
Last item added or modified (2011-11-04):  L-DE, a "geometrical" Differential Evolution variant

Each entry below gives: title — comments — last update — download.
DEMO/Fun — A simplified adaptive version (C source). No parameters to "guess", and you can add your own objective function. NEW: you can now solve Graph Coloring Problems. Also, you can generate a data file for post-mortem animation. If you want, you can try some options like local queens or Guided Random Generation. (2001-11-12)
Downloads: some animations (zipped); ANSI C source code + examples (zipped); Cheap_PSO "Think locally, act locally" (.pdf + .doc, zipped; not up to date).
Note: see Tribes for a more recent (and quite different) PSO version.

DEMO/Animation — A small animation showing how a swarm generates subswarms around suboptima for the Alpine function. (2000-10-24) Animated GIF (zipped).
Some math about Particle Swarm Optimization (PSO) — Analytical view. Explicit and implicit representations. Particle in Complexland. Convergence and space of states, constriction. (Quite big: you'd better download the PDF file.) (1998-11-27) PDF file, 182 KB.
Algebraic View — Basic case. Eigenvalues. Cycles. (1998-04-30) PDF file (zipped).
The Alpine Function — Many local optima, just one maximum. (1998-11-24) PDF file (zipped).
The Most Stupid Algorithm (MSA) — Random search. Cost of the algorithm. (1998-11-24) PDF file (zipped).
Performance Maps — Deterministic PSO. Convergence time vs. two parameters. (1998-11-24) PDF file (zipped).
Mechanical Analogy (spring picture) — Two parameters. Virtual springs. (1998-11-24) PDF file.
The Simplest PSO — Deterministic, one equation, one parameter. No-hope/Re-hope method. Alpine and Banana functions. (1998-12-11) PDF file (zipped).
The Swarm & The Queen — A great improvement of the "Simplest PSO", particularly thanks to a gravity center (the "queen") and a better adaptive No-hope/Re-hope method. The paper was presented at CEC 99. (1999-09-22)
Downloads: play with a worksheet (2D Alpine function, French Excel version, 440 KB); slides (1999-07-26, PowerPoint file, Macintosh, may work on some PCs, zipped); online (Web pages); draft PDF version of the paper (slightly updated in 2001, zipped).
A little fuzzy toolbox — Fuzzy set computation tools to prepare a fuzzy PSO. Arithmetic functions and some classical ones (exp, log, sin). (1999-03-28) ANSI C source code.
The Particle Swarm: Explosion, Stability, and Convergence in a Multi-Dimensional Complex Space, by Maurice Clerc and James Kennedy — A big paper about the algebraic and analytical views, with examples. Published by IEEE in 2002, so only the draft can be given here. It received the 2005 IEEE Transactions on Evolutionary Computation award ($1,000 for Jim and me ;-)). (1999-04-29; published 2002-04)
Downloads: slides (1999-07-26, PowerPoint file, Macintosh, may work on some PCs), online Web pages (Presentations_5D/index.htm); draft (zipped MS Word file).
Discrete PSO: Traveling Salesman Problem (v. 4.2) — PSO can work on finite state spaces and with non-continuous functions. Here is an example: looking for shortest cycles in a valuated graph. The last version is cleaner, and strictly written in ANSI C. (2000-02-29; last update 2003-04-23)
Downloads: .pdf file (4.2, zipped); C sources (4.4, zipped); ANSI C sources (zipped).
Discrete PSO: Fuzzy Combinatorial Black Box — When running a combinatorial black box with PSO, the problem is that, depending on the parameters, results can be quite good ... or worse than random or exhaustive search. If you have no way to guess what the best parameters could be, use fuzzy representations: you will never have very bad results (and almost never very good ones). (2000-04-28) PDF file (zipped).
Particle Swarm Optimisation (Optimisation par essaim particulaire) — Presentation (in French) given in Paris at JET5. (2000-05-05) Zipped PowerPoint files: OEP1, OEP2.
When Ant Colony Optimization does not need Swarm Intelligence — A very short paper formally proving that ACO, as defined by Marco Dorigo in 1999, does not really use swarm intelligence. The underlying idea is to analyze ACO in order to see whether some principles could be useful to improve PSO. (2000-08-08) PDF file (zipped).
OEP et coloriage de graphe pour l'affectation de fréquences (PSO and Graph Coloring for Frequency Assignment) — In French, expurgated version. A hybrid system using an adaptive PSO and some more classical algorithms gives interesting results. (2001-07) PDF file (zipped).
PSO and Information — What kind of information does PSO use to find a solution, and how does it use it? First part (draft) of this study. (2001-11-16) MS Word file and PDF file (zipped).
A few words about constriction — A conjecture and a rule of thumb to find "good" parameters. (2001-12-11) MS Word file and PDF file (zipped).
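A side note, not from the paper itself: in the published Clerc-Kennedy analysis, the constriction coefficient for phi = c1 + c2 > 4 is chi = 2 / |2 - phi - sqrt(phi^2 - 4*phi)|, and the common choice phi = 4.1 gives chi ≈ 0.7298 and c1 = c2 = chi*phi/2 ≈ 1.496. A minimal C sketch:

#include <math.h>
#include <stdio.h>

/* Constriction coefficient (Clerc & Kennedy analysis), valid for phi > 4 */
double constriction(double phi)
{
    return 2.0 / fabs(2.0 - phi - sqrt(phi * phi - 4.0 * phi));
}

int main(void)
{
    double phi = 4.1;                          /* common choice */
    double chi = constriction(phi);
    printf("chi = %f\n", chi);                 /* about 0.7298 */
    printf("c1 = c2 = %f\n", chi * phi / 2.0); /* about 1.496 */
    return 0;
}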
The old bias and its solution — All PSO versions using randomness also use biased formulas. This does not prevent convergence, but it increases the variance of a set of runs. This short paper explains the bias and how to avoid it. (2001-12-12; last update 2003-07-10) MS Word file and PDF file (zipped).
Warning: this version is also biased! See below, Uniform Hyperspherical Distribution.

F(?)AQ — Not so Frequently, but you quite often ask me some questions. Here are some answers. Don't hesitate to write to me, however. (2006-03-11)
Think Locally, Act Locally: A Framework for Adaptive Particle Swarm Optimizers — In Particle Swarm Optimization, each particle moves in the search space and updates its velocity according to the best previous positions already found by its neighbors (and itself), trying to find an even better position. This approach has been proved to be powerful, but it needs parameters predefined by the user (swarm size, neighborhood size, and some coefficients), and tuning these parameters for a given problem may in fact be quite difficult. This paper presents a framework for designing adaptive particle swarm optimizers without any "supervisor", in order to obtain autonomous methods in which the user has almost nothing to do but describe the problem. To illustrate the adaptation process, a possible implementation for a simple PSO version is given, so that we can examine how different, and globally better, the behavior of the swarm is on some classical test functions. (2002-07-01; last update 2004-10-24) Zipped .doc file.
Tribes, a parameter-free PSO — Particle Swarm Optimizers usually need some predefined parameters: numerical coefficients, swarm size, neighbourhood size and topology. This is not the case with the present version, thanks to two techniques: hyperspheres instead of hyperparallelepipeds for proximity areas, and adaptation of the swarm size as well as of the relationships between the particles. (2004-05-05)
.doc paper (zipped). Warning: older than the C source code.
Tribes and Strategies — Here is the last "public" source code. Different strategies depending on the "past" of the particle; non-uniform distributions for proximity areas (Gaussian, for example, with fuzzification, noise, recursivity, etc.); a "no clamping and no evaluation" option; etc.
The C version contains a QAP (Quadratic Assignment Problem) option and a Traveling Salesman Problem (TSP) option. Not that good, but it may be useful for small graphs (say 30 nodes). The multiobjective option can also be used to cope with indicative (soft) constraints.
Last updates: 2006-11-10, 2006-10-26, 2006-09-21, 2006-01-03, 2005-03-25, 2004-10-04.
Change log (most recent first):
- Fixed some bugs in the simplified Java version. Results are now far better (in particular for Schwefel+noise, and Elliptic).
- Fixed a bug in the simplified Java version (about the information network).
- Slightly simplified Java version.
- Java version. It includes the "sunnyspell" method. I have added the 24 constrained problems of CEC 2006. Also, you can now safely launch several problems at the same time.
- Zipped C sources of the last public version.
- Added 14 of the 25 functions of the CEC 05 benchmark set. If you want to use them, you must also download the BM directory.
- Added a pseudo-gradient method to choose the best informer. Valid only if the search space is a metric one, so it is an option.
- This version now contains a correct hyperspherical distribution. The independent Gaussian distributions are quite good for small, partly combinatorial problems.
- A new distribution has been added (positive ellipsoidal sectors).
- You can now use any kind of discrete variable, by giving the list of acceptable values.
- Some "classical" examples have been added (gear train, pressure vessel, coil spring). At this date, results are slightly better than the best published ones, without any parameter to tune.
Tribes and MO — A small "paper" about multiobjective problems with Tribes, compared to MOPSO. Three examples are given, but the above source code contains far more (see myfunction.c). (2003-05-02) Obsolete; see TRIBES-D.
Distorted Gaussian Distribution — In order to escape "the old bias", Tribes can use different kinds of distributions, for example spherical or Gaussian ones, but they are all symmetrical. This one is not: it is "distorted" along an axis, typically p_i <-> p_g (best previous position <-> best previous position in the informer group). As it can easily be adapted to any PSO version, I give here just a "generic" C code, using two points A and B. Of course, in real use, you take only one point of the distribution. (2003-06-25) Zipped C source.
Uniform Hyperspherical Distribution — Last week, Rui Mendes pointed out that my "unbiased distribution" was in fact biased. So I have written a new version that should be perfect (and faster, anyway). It is now included in Tribes. (2003-07-10) C source (zipped).
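For background: a standard way to draw a point uniformly inside a hypersphere is to take the direction from a normalised Gaussian vector and the radius as R*u^(1/D), with u uniform in [0,1]. Here is a minimal C sketch of that textbook method (not necessarily the exact code used in Tribes):

#include <math.h>
#include <stdlib.h>

/* One standard Gaussian deviate (Box-Muller transform) */
static double gauss(void)
{
    double u1 = (rand() + 1.0) / (RAND_MAX + 2.0);
    double u2 = (rand() + 1.0) / (RAND_MAX + 2.0);
    return sqrt(-2.0 * log(u1)) * cos(6.283185307179586 * u2);
}

/* Uniform point x inside the D-sphere of radius R centred on c:
   direction = normalised Gaussian vector, radius = R * u^(1/D) */
void sphere_sample(int D, const double *c, double R, double *x)
{
    double norm2 = 0.0;
    for (int d = 0; d < D; d++) {
        x[d] = gauss();
        norm2 += x[d] * x[d];
    }
    double r = R * pow((double)rand() / RAND_MAX, 1.0 / D);
    for (int d = 0; d < D; d++)
        x[d] = c[d] + r * x[d] / sqrt(norm2);
}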
Semi-continuous challenge — In order to improve Tribes, I need help to solve these six problems. What are the best PSO versions and the best parameters? Is it possible to define some rules to find these good parameters? For "amatheurs", I have added a few words about "theoretical difficulty" and "handling constraints by homeomorphism". (2004-02-20) .pdf version.
OEP — You wanted it, here it is: the parametric PSO I used for the Semi-continuous challenge. Note that the comments are in French... (2004-10-14) C source (zipped).
PSO_basic — Between the too_many_parameters OEP and the no_parameters_at_all Tribes, this simple version may be enough for some people. (2004-08-09) C source (zipped).
A peer-reviewed electronic paper (in French) — M. Clerc et P. Siarry, "Une nouvelle métaheuristique pour l'optimisation : la méthode des essaims particulaires", J3eA, Vol. 3 (2004) 7. (2004-09-20) PDF file.
Binary PSO: toolbox, derivations, and mathematical insights — Still a draft, and quite long (40 p.), but you may be interested. (2005-02-02) Open Archive HAL; C source (zipped).
Advertisement: a book in French, published by Hermès-Lavoisier (260 pages, 70 €):
L'optimisation par essaims particulaires. Versions paramétriques et adaptatives.
It contains quite a few things never published elsewhere, neither here nor in journals, for example about possibility distributions or convergence criteria, plus some technical appendices for "amatheurs". (2005-01)
ERRATUM (2005-02-21):
- p. 138, 9.2.3.4: replace "K est un fait un maximum" with "K est en fait une valeur moyenne"; 9.2.3.6: replace "Informatrices N" with "Exploratrices N".
- p. 239: replace the last formula with x_{t+1} = x_t + v_{t+1} + (p - x_t)(1 - χ').
Stagnation analysis, or "What happens when nothing happens" — Thanks to this mathematical analysis, better confidence coefficients for parametric PSO are defined. (2006-08-05; earlier versions 2006-01-19, 2005-06-19)
Downloads: version edited by Riccardo Poli (better English, but not complete), as PDF here or on the XPS technical reports page (ref. CSM-460); complete PDF file on Open Archive HAL (see in particular Figure 4); first version (not online anymore).
SunnySpell — Just a way to add a new particle that is no longer purely at random. (2005-08-26)
Advertisement: Particle Swarm Optimization, ISTE — This book contains some things never published in English, even here, for example about non-uniform distributions of possibilities, convergence criteria, rumor spreading, etc. It is a translation of the 2005 book in French, with a few updates. (2006-02)
Confinements and Biases — Because of confinement, PSO performance depends on where the solution point is (near the center of the search space, near a bound, etc.). Nine confinement methods (including no confinement at all) are studied. A more robust hybrid one is suggested. (2006-03-12) Open Archive HAL.
When a particle leaves the search space, the "no clamping and no evaluation" method is the simplest one. However, it may happen that all particles are then moving outside the search space, and the algorithm never stops. Also, even if they induce a bias, some confinement methods may be more effective.
Some ideas about PSO — Some of them are so crazy that they might work. Some others seem reasonable but may not work. (2006-11-06) PDF file.
Comparing two algorithms — How to compare two stochastic algorithms from a user's point of view. (2006-07-04) Link to the corresponding item on the Math page.
Non-parametric optimisation for unimodal functions: DIchotomy — In order to improve Tribes when the fitness function seems to be unimodal, we need a specific strategy. Here is one (by dichotomy). (2006-08-14; first version 2006-07-13) New version, which includes a non-unimodality test: SciLab/MatLab code.
Swissknife PSO — Contains a lot of options, in order to "simulate" most of the interesting PSO variations. (2007-03-01) C source.
Back to Random Topology — A small paper re-explaining the method used in some of my "basic" PSOs, why it is equivalent to some "new" ones, and suggesting how it could be improved. (2007-02-26) PDF file.
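The method is simple enough to sketch here. The sketch below assumes the variant known from SPSO 2007: each particle informs itself plus K others chosen at random (K = 3 being typical), and the whole set of links is redrawn whenever the best known result stops improving.

#include <stdlib.h>

/* Random topology: links[i][j] = 1 means "particle i informs particle j".
   Duplicates among the K draws are allowed, so each particle informs at
   most K+1 particles, including itself. */
void random_topology(int S, int K, int links[S][S])
{
    for (int i = 0; i < S; i++) {
        for (int j = 0; j < S; j++)
            links[i][j] = 0;
        links[i][i] = 1;              /* each particle informs itself */
        for (int k = 0; k < K; k++)
            links[i][rand() % S] = 1; /* ... and K others, at random  */
    }
}
/* Typically called again, with fresh random links, after any iteration
   in which the swarm's best result did not improve. */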
When Nearer is Better — Most iterative optimisation algorithms, if not all, assume more or less that "nearer is better". A mathematical definition, and some fascinating derivations. (2007-03-22) Open Archive HAL.
TRIBES-D — A simplified version for real heterogeneous problems. It easily finds the best known solution for the classical "pressure vessel" problem. For multiobjective problems, it is also far better than the previous TRIBES version. (2008-02-01) C code and some results (ZIP file).
Simple Binary PSO — Coming partly from the binary toolbox, a C program that is easy to use and nevertheless quite effective. (2010-09-08; earlier versions 2008-05-11, 2008-04-30) C source (zipped).
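As background on what a binary PSO does (this is the classical Kennedy-Eberhart sigmoid rule, which the program above may refine): the velocity stays real-valued, and each bit is then set to 1 with probability s(v) = 1/(1 + exp(-v)). A minimal C sketch:

#include <math.h>
#include <stdlib.h>

/* Classical binary PSO position update: the velocity v is updated as in
   real-valued PSO, then each bit is redrawn with probability s(v[d]). */
void binary_position_update(int D, const double *v, int *x)
{
    for (int d = 0; d < D; d++) {
        double s = 1.0 / (1.0 + exp(-v[d]));
        x[d] = ((double)rand() / RAND_MAX < s) ? 1 : 0;
    }
}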
Bimetric Binary PSO — Coming from the previous one. Even simpler, and slightly better. (2008-11-04) C source (zipped).
Balanced PSO — Contrary to what is often claimed, Standard PSO does not ensure a good balance between exploration and exploitation. This small paper explains why, and then suggests an improved variant. Successive versions added a variable swarm size, more options (orthogonal walk, bi-strategy, etc.), and more options again (variable ring topology, orthogonal learning, etc.).
Last updates: 2010-04-15, 2010-03-17, 2010-02-28, 2010-02-15, 2010-01-09, 2009-09-30, 2009-01-12, 2009-01-05, 2008-12-24, 2008-10-31.
Downloads: C sources (zipped), including a version for the Handbook of Swarm Intelligence; PDF paper (2008-10-3, obsolete, there were a few bugs).
Initialisations — PSO can be significantly improved just by using a good initialisation scheme for the positions and the velocities. (2008-12-24; first version 2008-12-14) PDF paper. See Balanced PSO.
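One concrete scheme of this kind, the one used by SPSO 2007 as far as I recall (treat the exact formula as an assumption here): positions uniform in the search interval, and each velocity component set to half the difference between a second uniform point and the position, which keeps the first moves inside the box.

#include <stdlib.h>

static double unif(double a, double b)
{
    return a + (b - a) * ((double)rand() / RAND_MAX);
}

/* Initialise one particle: position uniform in [xmin, xmax]^D,
   velocity v[d] = (U(xmin, xmax) - x[d]) / 2 on each dimension */
void init_particle(int D, double xmin, double xmax, double *x, double *v)
{
    for (int d = 0; d < D; d++) {
        x[d] = unif(xmin, xmax);
        v[d] = (unif(xmin, xmax) - x[d]) / 2.0;
    }
}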
A method to improve standard PSO — Sometimes you know that your problem belongs to a given class, for example quite low dimension (say smaller than 30) and weakly multimodal. In such a case, it is possible to design a specific PSO more efficient than the standard one. Here is a method. (2009-03-13) PDF paper.
Universal Random Number Generator — In Maths Results, but useful for some PSO variants. (2009-12-22)
A Mini-benchmark — Four carefully chosen problems, and some results with Standard PSO 2007 and Variable PSO (see below). Very useful to quickly estimate whether your own brand-new algorithm is really promising or not. (2010-04-15) PDF paper.
Variable PSO — By playing with Balanced PSO, I have found this simple and effective variant: like SPSO 2007, but with a variable ring topology and a variable swarm size. You may try various rules to generate/delete particles. (2011-01-20; earlier versions 2010-05-13, 2010-04-15) C sources (zipped).
Ice Cream Cone PSO — Towards Standard PSO 201x? Almost as simple as SPSO 2007, but not biased. (2010-11-09) C source (zipped).
Standard PSO Descriptions — SPSO 2006, 2007 and 2011, as available on the Particle Swarm Central. Informal and formal descriptions, with a few comments. (2011-07-12) PDF paper.
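For reference, the move equations shared by SPSO 2006 and 2007 (SPSO 2011 draws its random points differently) use w = 1/(2 ln 2) ≈ 0.721 and c = 0.5 + ln 2 ≈ 1.193:

#include <math.h>
#include <stdlib.h>

/* SPSO 2006/2007-style update for one dimension. p is the particle's best
   previous position, g the best position among its informers; each term
   gets an independent uniform draw in [0, c]. */
void spso_update(double *x, double *v, double p, double g)
{
    const double w = 1.0 / (2.0 * log(2.0));     /* about 0.721 */
    const double c = 0.5 + log(2.0);             /* about 1.193 */
    double r1 = c * ((double)rand() / RAND_MAX); /* U(0, c) */
    double r2 = c * ((double)rand() / RAND_MAX); /* U(0, c) */
    *v = w * (*v) + r1 * (p - *x) + r2 * (g - *x);
    *x += *v;
}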
FAQ about PSO — Open Office presentation (ICSI 2011 seminar in Cergy). (2011-07-26) ODP file + TrueType font (zipped).
Let's build a PSO — Open Office presentation (ICSI 2011 seminar in Cergy). (2011-07-26) ODP file + TrueType font (zipped).
Linear DE — L-DE is a "geometrical" Differential Evolution variant whose behaviour does not depend on the coordinate system. (2011-11-04) C source (zipped).
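For contrast, here is what classic (non-geometrical) Differential Evolution does; its per-coordinate crossover is precisely what ties the behaviour to the coordinate system, which L-DE avoids. A minimal sketch of the usual DE/rand/1/bin trial-vector construction (not L-DE itself):

#include <stdlib.h>

/* Classic DE/rand/1/bin: build a trial vector from the current point x
   and three distinct population members a, b, c. F and CR are the usual
   DE parameters (e.g. F = 0.5, CR = 0.9). The coordinate-wise test on CR
   is what makes the result depend on the coordinate system. */
void de_trial(int D, const double *x, const double *a, const double *b,
              const double *c, double F, double CR, double *trial)
{
    int jrand = rand() % D;  /* at least one coordinate is always mutated */
    for (int j = 0; j < D; j++) {
        if (j == jrand || (double)rand() / RAND_MAX < CR)
            trial[j] = a[j] + F * (b[j] - c[j]);
        else
            trial[j] = x[j];
    }
}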

Other example code (AmiBroker optimization formulas):

1. Single variable optimization:

sigavg = Optimize( "Signal average", 9, 2, 20, 1 );

Buy = Cross( MACD( 12, 26 ), Signal( 12, 26, sigavg ) );
Sell = Cross( Signal( 12, 26, sigavg ), MACD( 12, 26 ) );

2. Two-variable optimization (suitable for 3D charting):

per = Optimize( "per", 2, 5, 50, 1 );
Level = Optimize( "level", 2, 2, 150, 4 );

Buy = Cross( CCI( per ), -Level );
Sell = Cross( Level, CCI( per ) );

3. Multiple (3) variable optimization:

mfast = Optimize( "MACD Fast", 12, 8, 16, 1 );
mslow = Optimize( "MACD Slow", 26, 17, 30, 1 );
sigavg = Optimize( "Signal average", 9, 2, 20, 1 );

Buy = Cross( MACD( mfast, mslow ), Signal( mfast, mslow, sigavg ) );
Sell = Cross( Signal( mfast, mslow, sigavg ), MACD( mfast, mslow ) );

After entering the formula, just click the Optimize button in the "Automatic Analysis" window. AmiBroker will start testing all possible combinations of the optimization variables and report the results in the list. When the optimization is done, the list of results is presented sorted by net % profit. As you can sort the results by any column of the list, it is easy to get the optimal parameter values for the lowest drawdown, the lowest number of trades, the largest profit factor, the lowest market exposure, or the highest risk-adjusted annual % return. The last columns of the result list give the values of the optimization variables for a given test.

When you decide which combination of parameters suits your needs best, all you need to do is replace the default values in the Optimize() calls with the optimal ones. At the current stage you need to type them by hand in the formula edit window (the default value is the second argument of each Optimize() call).

Displaying 3D animated optimization charts

To display a 3D optimization chart, you first need to run a two-variable optimization, i.e. a formula that contains exactly two Optimize() function calls. An example two-variable optimization formula looks like this:

per = Optimize( "per", 2, 5, 50, 1 );
Level = Optimize( "level", 2, 2, 150, 4 );

Buy = Cross( CCI( per ), -Level );
Sell = Cross( Level, CCI( per ) );

After entering the formula, click the "Optimize" button.

Once the optimization is complete, click the drop-down arrow on the Optimize button and choose "View 3D optimization graph". In a few seconds a colorful three-dimensional surface plot, generated from the results of the above formula, will appear in a 3D chart viewer window.

 
