In this research, a sequence of properties called deltas is first assigned to each prime number and then examined. Deltas depend only on the distribution of the prime numbers, so results obtained for the delta distribution can serve as a proxy for the distribution of the primes themselves. The first observation is that these properties are not unique: different primes may share the same delta value of a given order. A small number of deltas was found to cover a large portion of the primes, so by recognizing repetitive deltas the next primes can be predicted with a certain probability. The most important observation of this study, however, is the normal distribution of the deltas. This research does not attempt to justify the obtained observations; rather than answering questions, it seeks to ask the right ones.
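The abstract leaves the precise definition of delta to the paper itself. Purely as an illustrative sketch, the snippet below assumes (hypothetically) that the order-1 delta of a prime is its gap to the next prime and that higher-order deltas are iterated differences, then tallies how much of the sequence the most frequent delta values cover; the delta definition and the bound 10_000 are assumptions, not the paper's.

```python
# Hedged sketch: "delta" is assumed here to mean iterated finite
# differences of the prime sequence (order-1 delta = prime gap).
from collections import Counter
from sympy import primerange

primes = list(primerange(2, 10_000))

def deltas(seq, order):
    """Iterated finite differences of a sequence (assumed delta definition)."""
    for _ in range(order):
        seq = [b - a for a, b in zip(seq, seq[1:])]
    return seq

d1 = deltas(primes, 1)  # order-1 deltas (prime gaps, by assumption)
counts = Counter(d1)

# "A small number of deltas cover a large portion of prime numbers":
# check what fraction the five most common delta values account for.
top5 = counts.most_common(5)
covered = sum(c for _, c in top5) / len(d1)
print(top5, f"top-5 coverage: {covered:.1%}")
```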
We introduce Gravity, another algorithm for gradient-based optimization. In this paper, we explain how our novel idea changes parameters to reduce a deep learning model's loss. Gravity has three intuitive hyper-parameters, for which we propose best values, and we also propose an alternative to the moving average. To compare the performance of the Gravity optimizer with two common optimizers, Adam and RMSProp, five standard datasets were trained on two VGGNet models with a batch size of 128 for 100 epochs. Gravity's hyper-parameters did not need to be tuned for different models. As explained further in the paper, no overfitting-prevention technique was used, so that the direct impact of the optimizer itself on loss reduction could be investigated. The obtained results show that the Gravity optimizer performs more stably than Adam and RMSProp and achieves higher validation accuracy on datasets with more output classes, such as CIFAR-100 (Fine).
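The abstract does not reproduce Gravity's update rule, so the sketch below is not the authors' algorithm. It is only a minimal torch.optim.Optimizer skeleton showing where a custom rule of this kind plugs in; the bounded-gradient step, the placeholder moving average, and the hyper-parameter names (alpha, beta, l) are all hypothetical stand-ins.

```python
# Hedged sketch: NOT the Gravity update rule from the paper -- a generic
# custom-optimizer skeleton with hypothetical placeholder math.
import torch

class CustomOptimizer(torch.optim.Optimizer):
    def __init__(self, params, alpha=0.01, beta=0.9, l=1.0):
        defaults = dict(alpha=alpha, beta=beta, l=l)
        super().__init__(params, defaults)

    @torch.no_grad()
    def step(self):
        for group in self.param_groups:
            alpha, beta, l = group["alpha"], group["beta"], group["l"]
            for p in group["params"]:
                if p.grad is None:
                    continue
                state = self.state[p]
                v = state.get("v", torch.zeros_like(p))
                # Hypothetical bounded step: large gradients are damped.
                g = p.grad / (1 + (p.grad / l) ** 2)
                # Plain exponential moving average as a placeholder for the
                # paper's proposed "alternative to moving average".
                v = beta * v + (1 - beta) * g
                state["v"] = v
                p.add_(v, alpha=-alpha)  # descend along the smoothed step
```

Usage would mirror any built-in optimizer, e.g. `opt = CustomOptimizer(model.parameters())`, followed by the usual `loss.backward(); opt.step(); opt.zero_grad()` loop.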