Multi-donor Neural Transfer Learning for Genetic Programming

Published: 24 November 2022

Abstract

Genetic programming (GP), for the synthesis of brand new programs, continues to demonstrate increasingly capable results on increasingly complex problems. A key challenge in GP is how to learn from the past so that the successful synthesis of simple programs can feed into more challenging unsolved problems. Transfer Learning (TL) in the literature has yet to demonstrate an automated mechanism to identify existing donor programs with high-utility genetic material for new problems, instead relying on human guidance. In this article, we present a transfer learning mechanism for GP which fills this gap: we use a Turing-complete language for synthesis, and demonstrate how a neural network (NN) can be used to guide automated code fragment extraction from previously solved problems for injection into future problems. Using a framework which synthesises code from just 10 input-output examples, we first study the ability of an NN to recognise the presence of code fragments in a larger program, then present an end-to-end system which takes only input-output examples and generates code fragments as it solves easier problems, then deploys selected high-utility fragments to solve harder ones. The use of NN-guided genetic material selection shows significant performance increases, on average doubling the percentage of programs that can be successfully synthesised when tested on two different problem corpora, compared with a non-transfer-learning GP baseline.
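To make the two-phase mechanism above concrete, the following Java sketch outlines the flow: harvest fragments from easier solved problems, then let a neural scorer rank donors for a new task and seed GP with the best. Every type and name here (GpEngine, NeuralScorer, solveWithSeeds, and so on) is a hypothetical stand-in for illustration, not the paper's actual API.

// Hedged sketch only: hypothetical types and method names throughout.
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.function.Function;

record Task(List<int[]> inputs, List<int[]> outputs) {} // e.g., 10 I/O examples
record Program(List<String> lines) {}
record Fragment(List<String> lines) {}                  // short slice of a solved program

interface GpEngine {
    Program solve(Task t);                                // plain GP; may return null on failure
    Program solveWithSeeds(Task t, List<Fragment> seeds); // GP seeded with donor fragments
}

interface NeuralScorer {
    double score(Fragment f, Task t); // predicted utility of f for t's I/O examples
}

class TransferPipeline {
    private final List<Fragment> donorPool = new ArrayList<>();

    // Phase 1: solve easier problems, harvesting fragments from each solution.
    void harvest(List<Task> easy, GpEngine gp, Function<Program, List<Fragment>> extract) {
        for (Task t : easy) {
            Program solved = gp.solve(t);
            if (solved != null) donorPool.addAll(extract.apply(solved));
        }
    }

    // Phase 2: rank the donor pool with the NN and inject the top-k fragments.
    Program solveHard(Task hard, GpEngine gp, NeuralScorer nn, int k) {
        List<Fragment> ranked = new ArrayList<>(donorPool);
        ranked.sort(Comparator.comparingDouble((Fragment f) -> nn.score(f, hard)).reversed());
        return gp.solveWithSeeds(hard, ranked.subList(0, Math.min(k, ranked.size())));
    }
}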
Appendices

A Language Operators

Our target programming language uses the operators shown below, and can be automatically translated to C/Java code (Java equivalents are shown for each operator).
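The operator table itself is not reproduced in this extract, but the listings in Appendix E show the per-operator translations. As one hedged illustration, here is the "Append" solution of Table 16 assembled into a compilable Java method; the variables[]/arrays[] scaffolding and the input conventions are assumptions made for readability, not the paper's exact generated code.

// Illustrative assembly of the "Append" listing from Table 16.
// Assumed conventions: variables[0] holds the input length, variables[1]
// the value to append, and arrays[0] the input array.
static int[] append(int[] input, int valueToAppend) {
    int[] variables = new int[16];
    int[][] arrays = new int[2][];
    arrays[0] = input;
    variables[0] = input.length;
    variables[1] = valueToAppend;

    variables[6] = 1;                            // Literal
    variables[7] = variables[0] + variables[6];  // Add
    arrays[1] = new int[variables[7]];           // Make Array
    for (variables[2] = 0; variables[2] < variables[0]; variables[2]++) { // Loop
        variables[5] = arrays[0][variables[2]];  // Read
        arrays[1][variables[2]] = variables[5];  // Write
    }                                            // Endloop
    arrays[1][variables[2]] = variables[1];      // Write (appends after the loop)
    return arrays[1];
}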
B Corpus 1 Problems

C TL Metrics in Detail

In Section 4.4, we reported averaged metrics for TL; here, we report the per-problem metrics for time-to-threshold and transfer ratio, in Tables 12 and 13.
Table 12.
Problem | Generations to Threshold | Transfer Ratio
Add | 2,388.8 | 0.89
Append | 0.0 | 0.95
CumulativeAbsoluteSum | 2,346.4 | 1.00
CumulativeSum | 2,205.2 | 0.89
KeepEvenIndices | 1,888.8 | 0.66
ClipToMin | 1,932.4 | 0.70
RetainSecondHalf | 0.0 | 0.93
Sort | 2,277.6 | 0.98
Subtract | 1,996.8 | 0.71
Abs | 1,855.2 | 0.76
GreaterThan | 2,362.4 | 1.39
IndexParity | 774.8 | 0.46
FirstElementOnly | 0.0 | 0.79
Identity | 604.4 | 0.42
DivergentSequence | 1,846.8 | 0.72
Double | 1,322.8 | 0.55
ShiftRight | 0.0 | 0.92
ShiftRightLossy | 1,404.0 | 0.55
ShiftLeft | 948.8 | 0.52
ShiftLeftZeroPadded | 1,825.2 | 0.76
RetainFirstHalf | 0.0 | 0.92
LessThan | 2,718.8 | 1.35
Multiply | 1,818.8 | 0.69
Negative | 1,386.8 | 0.49
Pop | 1,257.6 | 0.49
KeepPositives | 1,499.2 | 0.63
KeepEvens | 2,314.4 | 0.92
ArrayLength | 0.0 | 0.24
ArrayToZero | 0.0 | 0.22
KeepNegatives | 1,856.4 | 0.74
KeepOdds | 1,783.6 | 0.94
Reverse | 1,504.8 | 0.58
CatToSelf | 2,111.2 | 0.81
CatZerosToSelf | 2,242.4 | 0.80
ClipToMax | 1,917.2 | 0.86
Table 12. TL Metrics for 25 Runs of the TL System Compared to the Baseline for the First Corpus
Generations to Threshold is the average number of generations taken for the TL system to outperform the baseline’s asymptotic performance (TL gains performance after this point). Transfer Ratio is the average fitness of the TL system divided by the average fitness of the baseline (fitnesses less than \(-1\) are set to \(-1\)). (n = 25).
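For readers reproducing these numbers, the two metrics can be computed from per-generation mean fitness curves roughly as follows. This is a minimal sketch assuming one averaged fitness value per generation and the clamping convention stated in the caption, not the paper's exact evaluation code.

// Transfer Ratio: average TL fitness divided by average baseline fitness,
// with fitnesses below -1 clamped to -1 (as in the caption above).
static double transferRatio(double[] tlFitness, double[] baselineFitness) {
    return mean(tlFitness) / mean(baselineFitness);
}

static double mean(double[] fitness) {
    double sum = 0.0;
    for (double f : fitness) sum += Math.max(f, -1.0); // clamp at -1
    return sum / fitness.length;
}

// Generations to Threshold: first generation at which the TL curve passes
// the baseline's asymptotic (final-generation) performance. Assumes higher
// fitness is better; returns the curve length if the threshold is never met.
static int generationsToThreshold(double[] tlFitness, double[] baselineFitness) {
    double asymptote = baselineFitness[baselineFitness.length - 1];
    for (int g = 0; g < tlFitness.length; g++) {
        if (tlFitness[g] > asymptote) return g;
    }
    return tlFitness.length;
}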
Table 13.
Problem | Generations to Threshold | Transfer Ratio
Square | 211.0 | 0.12
HollowSquare | 1,060.9 | 0.27
Parallelogram | 1,158.3 | 0.57
HollowParallelogram | 2,129.4 | 0.87
MirroredParallelogram | 1,830.6 | 0.75
MirroredHollowParallelogram | 1,968.1 | 0.72
RightTriangle | 152.0 | 0.07
HollowRightTriangle | 1,003.0 | 0.18
MirroredRightTriangle | 787.5 | 0.40
HollowMirroredRightTriangle | 1,605.4 | 0.59
InvertedRightTriangle | 271.0 | 0.14
HollowInvertedRightTriangle | 2,458.2 | 0.80
InvertedMirroredRightTriangle | 149.0 | 0.06
Inv’HollowMirr’RightTriangle | 792.7 | 0.24
IsoceleseTriangle | 0.0 | 0.89
HollowIsoceleseTriangle | 2,662.2 | 0.91
InvertedIsoceleseTriangle | 1,338.5 | 0.52
HollowInv’IsoceleseTriangle | 1,851.3 | 0.64
RectangleWithEmptyTrapezoid | 2,693.9 | 1.20
Inv’dRect’WithEmptyTrapezoid | 1,952.4 | 0.95
ObtuseTriangle | 1,832.9 | 0.74
HollowObtuseTriangle | 2,556.0 | 0.96
MirroredObtuseTriangle | 465.3 | 0.48
MirroredHollowObtuseTriangle | 2,140.6 | 0.78
InvertedObtuseTriangle | 0.0 | 0.95
HollowInvertedObtuseTriangle | 2,513.9 | 0.93
InvertedMir’ObtuseTriangle | 1,981.7 | 0.95
HollowMir’Inv’ObtuseTriangle | 2,273.9 | 0.95
VShape | 2,022.1 | 0.63
Trapezoid | 561.8 | 0.45
Table 13. TL Metrics for 25 Runs of the TL System Compared to the Baseline for the Second Corpus
Generations to Threshold is the average number of generations taken for the TL system to outperform the baseline’s asymptotic performance (TL gains performance after this point). Transfer Ratio is the average fitness of the TL system divided by the average fitness of the baseline (fitnesses less than \(-1\) are set to \(-1\)). (n = 25).
D Fisher Significance for Generalised Results

Table 14.
Problem | B’ Success | Success | Fisher Significance
Add | 7 | 17 | 0.006
Append | 28 | 15 | \(1.742\times10^{-4}\)
CumulativeAbsoluteSum | 1 | 5 | 0.090
CumulativeSum | 4 | 14 | 0.004
KeepEvenIndices | 2 | 23 | \(1.705\times10^{-8}\)
ClipToMin | 2 | 17 | \(2.547\times10^{-5}\)
RetainSecondHalf | 0 | 3 | 0.125
Sort | 0 | 0 | 1.0
Subtract | 5 | 23 | \(2.797\times10^{-6}\)
Abs | 5 | 19 | \(2.159\times10^{-4}\)
GreaterThan | 2 | 5 | 0.166
IndexParity | 22 | 29 | 0.011
FirstElementOnly | 3 | 23 | \(1.182\times10^{-7}\)
Identity | 25 | 30 | 0.026
DivergentSequence | 0 | 19 | \(2.671\times10^{-8}\)
Double | 4 | 28 | \(1.149\times10^{-10}\)
ShiftRight | 0 | 16 | \(9.720\times10^{-7}\)
ShiftRightLossy | 8 | 28 | \(7.062\times10^{-8}\)
ShiftLeft | 2 | 24 | \(3.695\times10^{-9}\)
ShiftLeftZeroPadded | 8 | 24 | \(3.350\times10^{-5}\)
RetainFirstHalf | 0 | 9 | \(9.680\times10^{-4}\)
LessThan | 0 | 5 | 0.026
Multiply | 8 | 22 | \(2.896\times10^{-4}\)
Negative | 15 | 27 | \(6.811\times10^{-4}\)
Pop | 1 | 26 | \(9.342\times10^{-12}\)
KeepPositives | 3 | 27 | \(1.393\times10^{-10}\)
KeepEvens | 0 | 0 | 1.0
ArrayLength | 25 | 30 | 0.026
ArrayToZero | 25 | 30 | 0.026
KeepNegatives | 9 | 22 | \(7.320\times10^{-4}\)
KeepOdds | 0 | 1 | 0.5
Reverse | 8 | 21 | \(7.320\times10^{-4}\)
CatToSelf | 0 | 21 | \(1.791\times10^{-9}\)
CatZerosToSelf | 2 | 24 | \(3.695\times10^{-9}\)
ClipToMax | 10 | 18 | 0.025
Table 14. Full Breakdown of All Problems in the First Corpus, 1D Arrays, with the Values Shown Being Those Cases Which Passed Generalisation on 1,000 Examples
Fisher significance calculated for these two success rates per problem.
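A minimal Java sketch of the significance test follows. It computes a one-sided Fisher exact p-value for two success counts out of n runs each; the strongly significant rows above are consistent with this convention (0 versus 9 successes out of 30 gives \(p \approx 9.68\times10^{-4}\), matching the RetainFirstHalf row), though the paper's exact tail convention is not restated here.

// One-sided Fisher exact test for succA successes out of n baseline runs
// versus succB successes out of n TL runs. Sums hypergeometric tail
// probabilities P(B >= succB) given the combined success count.
static double fisherOneSided(int succA, int succB, int n) {
    int k = succA + succB; // total successes across both groups
    double p = 0.0;
    for (int b = succB; b <= Math.min(k, n); b++) {
        p += Math.exp(logChoose(n, b) + logChoose(n, k - b) - logChoose(2 * n, k));
    }
    return p;
}

// Log of the binomial coefficient C(n, r); -infinity when undefined,
// which contributes probability zero above.
static double logChoose(int n, int r) {
    if (r < 0 || r > n) return Double.NEGATIVE_INFINITY;
    double s = 0.0;
    for (int i = 1; i <= r; i++) s += Math.log(n - r + i) - Math.log(i);
    return s;
}

// Example: fisherOneSided(0, 9, 30) evaluates to roughly 9.68e-4,
// matching the RetainFirstHalf row of Table 14.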
Table 15.
Problem | B’ Success | Success | Fisher Significance
Square | 24 | 30 | 0.011
HollowSquare | 21 | 30 | \(9.680\times10^{-4}\)
Parallelogram | 0 | 2 | 0.25
HollowParallelogram | 0 | 2 | 0.25
MirroredParallelogram | 2 | 13 | \(9.794\times10^{-4}\)
MirroredHollowParallelogram | 2 | 6 | 0.111
RightTriangle | 23 | 30 | 0.005
HollowRightTriangle | 15 | 30 | \(2.916\times10^{-6}\)
MirroredRightTriangle | 8 | 30 | \(4.135\times10^{-10}\)
HollowMirroredRightTriangle | 4 | 27 | \(9.721\times10^{-10}\)
InvertedRightTriangle | 18 | 30 | \(6.181\times10^{-5}\)
HollowInvertedRightTriangle | 3 | 24 | \(2.739\times10^{-8}\)
InvertedMirroredRightTriangle | 24 | 30 | 0.011
Inv’HollowMirr’RightTriangle | 14 | 30 | \(9.720\times10^{-7}\)
IsoceleseTriangle | 0 | 2 | 0.25
HollowIsoceleseTriangle | 0 | 7 | 0.005
InvertedIsoceleseTriangle | 6 | 24 | \(2.981\times10^{-6}\)
HollowInv’IsoceleseTriangle | 1 | 15 | \(3.110\times10^{-5}\)
RectangleWithEmptyTrapezoid | 2 | 2 | 0.5
Inv’dRect’WithEmptyTrapezoid | 0 | 7 | 0.005
ObtuseTriangle | 0 | 9 | \(9.680\times10^{-4}\)
HollowObtuseTriangle | 2 | 16 | \(6.839\times10^{-5}\)
MirroredObtuseTriangle | 0 | 0 | 1.0
MirroredHollowObtuseTriangle | 0 | 2 | 0.25
InvertedObtuseTriangle | 0 | 1 | 0.5
HollowInvertedObtuseTriangle | 0 | 4 | 0.058
InvertedMir’ObtuseTriangle | 0 | 3 | 0.125
HollowMir’Inv’ObtuseTriangle | 0 | 2 | 0.25
VShape | 2 | 27 | \(1.543\times10^{-11}\)
Trapezoid | 0 | 2 | 0.25
Table 15. Full Breakdown of All Problems in the Second Corpus, 2D Arrays, with the Values Shown Being Those Cases Which Passed Generalisation on 1,000 Examples
Fisher significance calculated for these two success rates per problem.
E Full Breakdown of Fragments Evaluated in Baseline GP Fragment Suggestion Experiments

The tables in this section show each fragment evaluated by the exhaustive fragment testing process. Each fragment is at most two lines long, and has no variables which depend on being set in lines outside the fragment (the fragment therefore stands alone in terms of functionality). The source code of the ground-truth implementation is given, firstly as simply the operator used on that line, and secondly in a C-like fashion (excluding braces). This C-like form is a programmatically generated translation of the source code in the custom language, provided for ease of readability (due to the difficult-to-parse structure of the custom language). We then refer to the lines in this source code by line number. Fragments cannot contain end-of-block operators (used to indicate the end point of blocks started by the flow-control operators loop and conditional), nor can they contain the initial definition of the 2D canvas. A sketch of this validity rule follows.
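In this sketch, the Line record and its read/write sets are hypothetical stand-ins; only the constraints themselves (at most two lines, no reads of variables defined outside the fragment other than program inputs, no end-of-block operators, no 2D canvas definition) come from the description above.

import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical representation of one program line: its operator name plus
// the variable indices it reads and writes.
record Line(String operator, Set<Integer> reads, Set<Integer> writes) {}

class FragmentFilter {
    // Returns true when a candidate fragment satisfies the constraints
    // described above. 'inputs' holds the variable indices that are
    // pre-defined program inputs (e.g., variables[0] and arrays[0]).
    static boolean isValid(List<Line> fragment, Set<Integer> inputs) {
        if (fragment.isEmpty() || fragment.size() > 2) return false;
        Set<Integer> defined = new HashSet<>(inputs);
        for (Line line : fragment) {
            if (line.operator().equals("Endloop")) return false;       // no end-of-block operators
            if (line.operator().equals("Make 2D Array")) return false; // no canvas definition
            if (!defined.containsAll(line.reads())) return false;      // must stand alone
            defined.addAll(line.writes());
        }
        return true;
    }
}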
Table 16.
Line | Operator | As Code
1 | Literal | variables[6] = 1;
2 | Add | variables[7] = variables[0] + variables[6];
3 | Make Array | arrays[1] = new int[variables[7]];
4 | Loop | for (variables[2]=0; variables[2]<variables[0]; variables[2]++)
5 | Read | variables[5] = arrays[0][variables[2]];
6 | Write | arrays[1][variables[2]] = variables[5];
7 | Endloop |
8 | Write | arrays[1][variables[2]] = variables[1];

Fragment | Success Rate
{1} | 3%
{1,2} | 27%
{1,4} | 10%
{4} | 0%
{4,5} | 0%
{4,6} | 0%
{4,8} | 0%
Table 16. Fragments Assessed from Program “Append”
The program’s code is listed in C-like format, with the operator named ahead of each line for ease of readability. Each fragment is then given as the set of line numbers it uses, followed by the success rate when that fragment is supplied as GP guidance. (n = 30).
Table 17.
Line | Operator | As Code
1 | Make Array | arrays[1] = new int[variables[0]];
2 | Loop | for (variables[2]=0; variables[2]<variables[0]; variables[2]++)
3 | Literal | variables[5] = -1;
4 | Read | variables[3] = arrays[0][variables[2]];
5 | Condition | if (variables[3] > 0)
6 | Else | else
7 | Multiply | variables[3] = variables[3] * variables[5];
8 | Endloop |
9 | Add | variables[4] = variables[4] + variables[3];
10 | Write | arrays[1][variables[2]] = variables[4];
11 | Endloop |

Fragment | Success Rate
{1} | 0%
{1,2} | 0%
{1,3} | 0%
{1,5} | 0%
{1,6} | 0%
{2} | 0%
{2,3} | 0%
{2,4} | 3%
{2,5} | 3%
{2,6} | 0%
{3} | 0%
{3,5} | 0%
{3,6} | 0%
{3,7} | 0%
{5} | 0%
{5,6} | 3%
{6} | 0%
Table 17. Fragments Assessed from Program “Cumulative Absolute Sum”
The program’s code is listed in C-like format, with the operator named ahead of each line for ease of readability. Each fragment is then given as the set of line numbers it uses, followed by the success rate when that fragment is supplied as GP guidance. (n = 30).
Table 18.
Line | Operator | As Code
1 | Literal | variables[4] = 2;
2 | Make Array | arrays[1] = new int[variables[0]];
3 | Loop | for (variables[2]=0; variables[2]<variables[0]; variables[2]++)
4 | Read | variables[3] = arrays[0][variables[2]];
5 | Modulo | variables[5] = variables[3] % variables[4];
6 | Condition | if (variables[5]==variables[6])
7 | Write | arrays[1][variables[2]] = variables[3];
8 | Endloop |
9 | Endloop |

Fragment | Success Rate
{1} | 0%
{1,2} | 6%
{1,3} | 3%
{1,5} | 3%
{2} | 3%
{2,3} | 0%
{3} | 0%
{3,4} | 0%
{3,7} | 0%
Table 18. Fragments Assessed from Program “Keep Evens”
The program’s code is listed in C-like format, with the operator named ahead of each line for ease of readability. Each fragment is then given as the set of line numbers it uses, followed by the success rate when that fragment is supplied as GP guidance. (n = 30).
Table 19.
Line | Operator | As Code
1 | Literal | variables[6] = 2;
2 | Divide | variables[3] = variables[0] / variables[6];
3 | Make Array | arrays[1] = new int[variables[3]];
4 | Loop | for (variables[2]=0; variables[2]<variables[3]; variables[2]++)
5 | Read | variables[5] = arrays[0][variables[2]];
6 | Write | arrays[1][variables[2]] = variables[5];
7 | Endloop |

Fragment | Success Rate
{1} | 0%
{1,2} | 13%
Table 19. Fragments Assessed from Program “Retain First Half”
The program’s code is listed in C-like format, with the operator named ahead of each line for ease of readability. Each fragment is then given as the set of line numbers it uses, followed by the success rate when that fragment is supplied as GP guidance. (n = 30).
Table 20.
Line | Operator | As Code
1 | Literal | variables[7] = 2;
2 | Make Array | arrays[1] = new int[variables[0]];
3 | Loop | for (variables[2]=0; variables[2]<variables[0]; variables[2]++)
4 | Subtract | variables[6] = variables[0] - variables[2];
5 | Subtract | variables[6] = variables[6] - variables[7];
6 | Read | variables[5] = arrays[0][variables[6]];
7 | Write | arrays[1][variables[2]] = variables[5];
8 | Endloop |

Fragment | Success Rate
{1} | 63%
{1,2} | 80%
{1,3} | 77%
{2} | 73%
{2,3} | 60%
{3} | 63%
{3,4} | 80%
{3,7} | 77%
Table 20. Fragments Assessed from Program “Reverse”
The program’s code is listed in C-like format, with the operator named ahead of each line for ease of readability. Each fragment is then given as the set of line numbers it uses, followed by the success rate when that fragment is supplied as GP guidance. (n = 30).
Table 21.
Line | Operator | As Code
1 | Literal | variables[6] = 1;
2 | Add | variables[8] = variables[0] + variables[6];
3 | Make Array | arrays[1] = new int[variables[8]];
4 | Loop | for (variables[2]=0; variables[2]<variables[0]; variables[2]++)
5 | Add | variables[7] = variables[2] + variables[6];
6 | Read | variables[5] = arrays[0][variables[2]];
7 | Write | arrays[1][variables[7]] = variables[5];
8 | Endloop |

Fragment | Success Rate
{1} | 3%
{1,2} | 13%
{1,4} | 20%
{4} | 0%
{4,6} | 0%
Table 21. Fragments Assessed from Program “Shift Right”
The program’s code is listed in C-like format, with the operator named ahead of each line for ease of readability. Each fragment is then given as the set of line numbers it uses, followed by the success rate when that fragment is supplied as GP guidance. (n = 30).
Table 22.
Line | Operator | As Code
1 | Literal | variables[6] = 2;
2 | Add | variables[8] = variables[0] + variables[6];
3 | Make Array | arrays[1] = new int[variables[0]];
4 | Subtract | variables[9] = variables[0] - variables[6];
5 | Loop | for (variables[2]=0; variables[2]<variables[9]; variables[2]++)
6 | Add | variables[7] = variables[2] + variables[6];
7 | Read | variables[5] = arrays[0][variables[2]];
8 | Write | arrays[1][variables[7]] = variables[5];
9 | Endloop |

Fragment | Success Rate
{1} | 80%
{1,2} | 73%
{1,3} | 63%
{1,4} | 63%
{3} | 67%
Table 22. Fragments Assessed from Program “Shift Right Lossy”
The program’s code is listed in C-like format, with the operator named ahead of each line for ease of readability. Each fragment is then given as the set of line numbers it uses, followed by the success rate when that fragment is supplied as GP guidance. (n = 30).
Table 23.
Line | Operator | As Code
1 | Literal | variables[5] = 1;
2 | Subtract | variables[1] = variables[0] - variables[5];
3 | Loop | for (variables[2]=0; variables[2]<variables[0]; variables[2]++)
4 | Loop | for (variables[3]=0; variables[3]<variables[1]; variables[3]++)
5 | Add | variables[6] = variables[3] + variables[5];
6 | Read | variables[4] = arrays[0][variables[3]];
7 | Read | variables[7] = arrays[0][variables[6]];
8 | Subtract | variables[8] = variables[4] - variables[7];
9 | Condition | if (variables[8] > 0)
10 | Write | arrays[0][variables[6]] = variables[4];
11 | Write | arrays[0][variables[3]] = variables[7];
12 | Endloop |
13 | Endloop |
14 | Endloop |
15 | Make Array | arrays[1] = new int[variables[0]];
16 | Loop | for (variables[2]=0; variables[2]<variables[0]; variables[2]++)
17 | Read | variables[5] = arrays[0][variables[2]];
18 | Write | arrays[1][variables[2]] = variables[5];
19 | Endloop |

Fragment | Success Rate
{1} | 0%
{1,2} | 0%
{1,3} | 0%
{1,8} | 0%
{1,15} | 0%
{1,16} | 0%
{3} | 0%
{3,8} | 0%
{3,15} | 0%
{3,16} | 0%
{3,17} | 0%
{4} | 0%
{4,6} | 0%
{8} | 0%
{8,9} | 0%
{8,15} | 0%
{8,16} | 0%
{15} | 0%
{15,16} | 0%
{16} | 0%
Table 23. Fragments Assessed from Program “Sort”
The program’s code is listed in C-like format, with the operator named ahead of each line for ease of readability. Each fragment is then given as the set of line numbers it uses, followed by the success rate when that fragment is supplied as GP guidance. (n = 30).
Table 24.
Line | Operator | As Code
1 | Make 2D Array | new 2DArray(size=variables[0]);
2 | Literal | variables[6] = 2;
3 | Divide | variables[4] = variables[0] / variables[6];
4 | Loop | for (variables[2]=0; variables[2]<variables[4]; variables[2]++)
5 | Loop | for (variables[3]=0; variables[3]<variables[4]; variables[3]++)
6 | Add | variables[7] = variables[2] + variables[3];
7 | Write to 2D | array[variables[7]][variables[3]] = 1;
8 | Endloop |
9 | Endloop |

Fragment | Success Rate
{2} | 23%
{2,3} | 30%
Table 24. Fragments Assessed from Program “Mirrored Parallelogram”
The program’s code is listed in C-like format, with the operator named ahead of each line for ease of readability. Each fragment is then given as the set of line numbers it uses, followed by the success rate when that fragment is supplied as GP guidance. (n = 30).
Table 25.
Line | Operator | As Code
1 | Make 2D Array | new 2DArray(size=variables[0]);
2 | Literal | variables[6] = 2;
3 | Divide | variables[4] = variables[0] / variables[6];
4 | Loop | for (variables[2]=0; variables[2]<variables[4]; variables[2]++)
5 | Add | variables[5] = variables[2] + variables[4];
6 | Write to 2D | array[variables[5]][variables[10]] = 1;
7 | Write to 2D | array[variables[2]][variables[4]] = 1;
8 | Subtract | variables[6] = variables[4] - variables[2];
9 | Write to 2D | array[variables[2]][variables[6]] = 1;
10 | Write to 2D | array[variables[5]][variables[6]] = 1;
11 | Endloop |
12 | Literal | variables[8] = 1;
13 | Subtract | variables[7] = variables[0] - variables[8];
14 | Write to 2D | array[variables[7]][variables[10]] = 1;

Fragment | Success Rate
{2} | 13%
{2,3} | 60%
{2,12} | 10%
{12} | 13%
{12,13} | 40%
Table 25. Fragments Assessed from Program “Mirrored Hollow Parallelogram”
The program’s code is listed in C-like format, with the operator named ahead of each line for ease of readability. Each fragment is then given as the set of line numbers it uses, followed by the success rate when that fragment is supplied as GP guidance. (n = 30).
Table 26.
Line | Operator | As Code
1 | Make 2D Array | new 2DArray(size=variables[0]);
2 | Literal | variables[1] = 1;
3 | Subtract | variables[4] = variables[0] - variables[1];
4 | Loop | for (variables[2]=0; variables[2]<variables[0]; variables[2]++)
5 | Write to 2D | array[variables[2]][variables[4]] = 1;
6 | Write to 2D | array[variables[5]][variables[2]] = 1;
7 | Write to 2D | array[variables[2]][variables[2]] = 1;
8 | Endloop |

Fragment | Success Rate
{2} | 80%
{2,3} | 90%
{2,4} | 80%
{4} | 90%
{4,6} | 63%
{4,7} | 87%
Table 26. Fragments Assessed from Program “Hollow Right Triangle”
The program’s code is listed in C-like format, with the operator named ahead of each line for ease of readability. Each fragment is then given as the set of line numbers it uses, followed by the success rate when that fragment is supplied as GP guidance. (n = 30).
Table 27.
Line | Operator | As Code
1 | Make 2D Array | new 2DArray(size=variables[0]);
2 | Literal | variables[3] = 1;
3 | Subtract | variables[4] = variables[0] - variables[3];
4 | Loop | for (variables[2]=0; variables[2]<variables[0]; variables[2]++)
5 | Write to 2D | array[variables[2]][variables[4]] = 1;
6 | Write to 2D | array[variables[4]][variables[2]] = 1;
7 | Subtract | variables[5] = variables[0] - variables[2];
8 | Subtract | variables[5] = variables[5] - variables[3];
9 | Write to 2D | array[variables[2]][variables[5]] = 1;
10 | Endloop |

Fragment | Success Rate
{2} | 67%
{2,3} | 93%
{2,4} | 80%
{4} | 67%
{4,7} | 80%
Table 27. Fragments Assessed from Program “Hollow Mirrored Right Triangle”
The program’s code is listed in C-like format, with the operator named ahead of each line for ease of readability. Each fragment is then given as the set of line numbers it uses, followed by the success rate when that fragment is supplied as GP guidance. (n = 30).
Table 28.
Line | Operator | As Code
1 | Make 2D Array | new 2DArray(size=variables[0]);
2 | Literal | variables[4] = 2;
3 | Loop | for (variables[2]=0; variables[2]<variables[0]; variables[2]++)
4 | Multiply | variables[6] = variables[2] * variables[4];
5 | Subtract | variables[5] = variables[0] - variables[6];
6 | Loop | for (variables[3]=0; variables[3]<variables[5]; variables[3]++)
7 | Add | variables[7] = variables[3] + variables[2];
8 | Write to 2D | array[variables[7]][variables[2]] = 1;
9 | Endloop |
10 | Endloop |

Fragment | Success Rate
{2} | 23%
{2,3} | 20%
{3} | 20%
Table 28. Fragments Assessed from Program “Inverted Isosceles Triangle”
The program’s code is listed in C-like format, with the operator named ahead of each line for ease of readability. Each fragment is then given as the set of line numbers it uses, followed by the success rate when that fragment is supplied as GP guidance. (n = 30).
Table 29.
Line | Operator | As Code
1 | Make 2D Array | new 2DArray(size=variables[0]);
2 | Literal | variables[7] = 1;
3 | Literal | variables[4] = 2;
4 | Divide | variables[5] = variables[0] / variables[4];
5 | Loop | for (variables[2]=0; variables[2]<variables[0]; variables[2]++)
6 | Loop | for (variables[3]=0; variables[3]<variables[5]; variables[3]++)
7 | Subtract | variables[8] = variables[0] - variables[5];
8 | Divide | variables[8] = variables[8] / variables[4];
9 | Subtract | variables[8] = variables[8] - variables[3];
10 | Add | variables[9] = variables[8] + variables[7];
11 | Condition | if (variables[9] > 0)
12 | Subtract | variables[9] = variables[2] - variables[8];
13 | Condition | if (variables[9] > 0)
14 | Subtract | variables[9] = variables[0] - variables[8];
15 | Subtract | variables[9] = variables[9] - variables[2];
16 | Condition | if (variables[9] > 0)
17 | Write to 2D | array[variables[2]][variables[3]] = 1;
18 | Endloop |
19 | Endloop |
20 | Endloop |
21 | Endloop |
22 | Endloop |

Fragment | Success Rate
{2} | 10%
{2,3} | 10%
{2,5} | 10%
{3} | 3%
{3,4} | 0%
{3,5} | 10%
{5} | 3%
Table 29. Fragments Assessed from Program “Trapezoid”
The program’s code is listed in C-like format, with the operator named ahead of each line for ease of readability. Each fragment is then given as the set of line numbers it uses, followed by the success rate when that fragment is supplied as GP guidance. (n = 30).

