News

Research news

Container ports are important hubs in the global trade network. Their role has grown enormously in recent years, and operational demands are constantly changing, especially as more sophisticated logistics systems emerge. A study in the International Journal of Shipping and Transport Logistics sheds new light on how changes in this sector are affecting port efficiency, with a focus on the different types of container activity.

Fernando González-Laxe of the University Institute of Maritime Studies, A Coruña University, and Xose Luis Fernández and Pablo Coto-Millán of the Universidad de Cantabria, Santander, Spain, explain that container ports handle cargo packed in standardized shipping containers, the big metal boxes with which many people are familiar, commonly transported en masse on vast sea-going vessels, unloaded port-side, and loaded onto trains and road transporters for their onward journey. The increasing size of the ships used to transport these containers, some of which can carry up to 25,000 TEUs (twenty-foot equivalent units, the containers), means there is growing pressure on ports to expand their capacity. As such, there is a lot of ongoing effort to automate processes and optimize port operations to allow the big container ports to remain viable and competitive.

The team used Data Envelopment Analysis (DEA) to evaluate the efficiency of container ports by comparing the inputs and outputs of their operations. They focused on ten major Spanish container ports – among them the major ports of Algeciras, Barcelona, and Valencia – in order to understand how various types of container activity – import/export, transshipment, and cabotage (coastal shipping) – influence port performance.
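For readers unfamiliar with DEA, the sketch below shows how an input-oriented efficiency score can be computed as a linear programme in Python. The port data (berth length, cranes, TEU throughput) is hypothetical, and the formulation is the textbook CCR model, not necessarily the exact specification used in the paper.

    import numpy as np
    from scipy.optimize import linprog

    def dea_ccr_input(X, Y):
        # Input-oriented CCR efficiency: for each port k, shrink its inputs by
        # a factor theta while a weighted mix of peer ports matches its output.
        n, m = X.shape              # n ports, m inputs
        s = Y.shape[1]              # s outputs
        scores = []
        for k in range(n):
            c = np.r_[1.0, np.zeros(n)]                 # minimise theta
            A_ub, b_ub = [], []
            for i in range(m):                          # peer inputs <= theta * own input
                A_ub.append(np.r_[-X[k, i], X[:, i]])
                b_ub.append(0.0)
            for r in range(s):                          # peer outputs >= own output
                A_ub.append(np.r_[0.0, -Y[:, r]])
                b_ub.append(-Y[k, r])
            res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                          bounds=[(None, None)] + [(0, None)] * n)
            scores.append(res.x[0])
        return np.array(scores)

    # Hypothetical inputs: [berth length (m), cranes]; output: [annual TEU, thousands]
    X = np.array([[1000.0, 8.0], [1500.0, 10.0], [800.0, 6.0]])
    Y = np.array([[500.0], [900.0], [300.0]])
    print(dea_ccr_input(X, Y))      # 1.0 marks the efficient frontier

A score of 1.0 places a port on the efficient frontier; lower scores indicate how far its inputs could, in principle, be scaled back while sustaining the same throughput.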

One of the key findings from the study is the relationship between port efficiency and the types of container activities handled. The team found that there is an inverted U-shape relationship: ports that balanced transshipment (transferring containers between ships at intermediate points) with import/export activities tended to perform better than those that specialized in only one type of activity. This suggests that a diversified approach to container activities may enhance port efficiency.

The work suggests that by adopting a balanced approach to their activities, container ports could boost efficiency and reinforce their role in the global supply chain.

González-Laxe, F., Fernández, X.L. and Coto-Millán, P. (2024) 'Transhipment: when movement matters in port efficiency', Int. J. Shipping and Transport Logistics, Vol. 18, No. 4, pp.383–402.
DOI: 10.1504/IJSTL.2024.140429

Dr Dolittle eat your heart out! Researchers writing in the International Journal of Engineering Systems Modelling and Simulation demonstrate how a trained algorithm can identify the trumpeting calls of elephants, distinguishing them from human and other animal sounds in the environment. The work could improve safety for villagers and help farmers protect their crops and homesteads from wild elephants in India.

T. Thomas Leonid of the KCG College of Technology and R. Jayaparvathy of the SSN College of Engineering in Chennai, India, explain how conflicts between people and elephants are becoming increasingly common, especially in areas where human activity has encroached on natural elephant habitats. This is particularly true where agriculture meets forested land. These conflicts are not just an environmental concern; they pose a threat to human life and livelihoods.

In India, wild elephants are responsible for more human fatalities than large predators. Their presence also leads to the destruction of crops and infrastructure, which creates a heavy financial burden on rural communities. Of course, the elephants are not to blame; they are wild animals doing their best to survive. The root causes lie in habitat destruction due to human activities such as mining, dam construction, and increasing encroachment into forests for resources like firewood and water.

As such, finding effective solutions to mitigate human-elephant encounters is becoming increasingly urgent. The team suggests that a way to reduce the number of tragic and costly outcomes would be to put in place an early-warning system. Such a system would recognise elephant behaviour from their vocalisations and allow farmers and others to avoid the elephants or perhaps even safely divert an incoming herd before it becomes a serious and damaging hazard.

The researchers compared several machine learning models to determine which one best detects and classifies elephant sounds. The models tested included Support Vector Machines (SVM), K-nearest Neighbours (KNN), Naive Bayes, and Convolutional Neural Networks (CNN). They trained each of these algorithms on a dataset of 450 animal sound samples from five different species. One of the key steps in the process is feature extraction, which involves identifying distinctive characteristics within the audio signals, such as frequency, amplitude, and the temporal structure of the sounds. These features are then used to train the machine learning models to recognise elephant calls.
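To illustrate this kind of pipeline, the sketch below extracts MFCC (mel-frequency cepstral coefficient) features with librosa and trains one of the classical models mentioned, a support vector machine. The synthetic tones standing in for animal recordings and the specific feature choice are our assumptions; the paper's dataset and exact feature set are not reproduced here.

    import numpy as np
    import librosa
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    sr = 22050

    def synth(freq):
        # One second of a noisy sine tone as a stand-in for a field recording
        t = np.linspace(0.0, 1.0, sr, endpoint=False)
        return (np.sin(2 * np.pi * freq * t)
                + 0.1 * rng.standard_normal(sr)).astype(np.float32)

    def mfcc_features(y):
        # Summarise each clip by the time-average of its MFCCs
        return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20).mean(axis=1)

    # Class 0: low-frequency "rumble" stand-ins; class 1: higher-pitched calls
    X = np.array([mfcc_features(synth(rng.uniform(20, 60))) for _ in range(40)]
                 + [mfcc_features(synth(rng.uniform(400, 800))) for _ in range(40)])
    y = np.array([0] * 40 + [1] * 40)

    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)
    clf = SVC().fit(Xtr, ytr)
    print("accuracy:", clf.score(Xte, yte))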

The most accurate was the Convolutional Neural Network (CNN), a deep learning model that automatically learns complex features from raw data. CNNs are particularly well suited to this type of task because of their ability to recognise intricate patterns in sound data. The CNN achieved an accuracy of 84 percent, far better than the other models. This might be improved, but it is sufficiently accurate to underpin a reliable, automated system for detecting elephants on the march towards homes and farms.
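The sketch below shows what a small CNN of this kind might look like in PyTorch, operating on sequences of MFCC frames. The layer sizes, input shape, and five-class output are illustrative assumptions rather than the architecture reported in the paper.

    import torch
    import torch.nn as nn

    class SoundCNN(nn.Module):
        # A compact 1D CNN over MFCC sequences: (batch, n_mfcc, time) -> class logits
        def __init__(self, n_mfcc=20, n_classes=5):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv1d(n_mfcc, 32, kernel_size=5, padding=2), nn.ReLU(),
                nn.MaxPool1d(2),
                nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1), nn.Flatten(),
                nn.Linear(64, n_classes),
            )

        def forward(self, x):
            return self.net(x)

    model = SoundCNN()
    dummy = torch.randn(8, 20, 100)   # a batch of 8 clips, 20 MFCCs, 100 frames
    print(model(dummy).shape)         # torch.Size([8, 5]): one logit per species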

Leonid, T.T. and Jayaparvathy, R. (2024) 'Elephant sound classification using machine learning algorithms for mitigation strategy', Int. J. Engineering Systems Modelling and Simulation, Vol. 15, No. 5, pp.248–252.
DOI: 10.1504/IJESMS.2024.140803

Research in the International Journal of Biometrics introduces a method to improve the accuracy and speed of dynamic emotion recognition using a convolutional neural network (CNN) to analyse faces. The work undertaken by Lanbo Xu of Northeastern University in Shenyang, China, could have applications in mental health, human-computer interaction, security, and other areas.

Facial expressions are a major part of non-verbal communication, providing clues about an individual's emotional state. Until now, emotion recognition systems have used static images, which means they cannot capture the changing nature of emotions as they play out over a person's face during a conversation, interview or other interaction. Xu's work addresses this by focusing on video sequences. The system can track changing facial expressions over a series of video frames and then offer a detailed analysis of how a person's emotions unfold in real time.

Prior to analysis, the system applies the "chaotic frog leap algorithm" to sharpen key facial features. The algorithm mimics the foraging behaviour of frogs to find optimal parameters in the digital images. At the heart of the approach is a CNN trained on a dataset of human expressions, which processes visual data by recognizing patterns in new images that match those seen in training. By analysing several frames from video footage, the system can capture movements of the mouth, eyes, and eyebrows, which are often subtle but important indicators of emotional changes.
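A minimal sketch of such per-frame analysis is shown below. The untrained placeholder network, the 48-by-48 greyscale face crops, and the five-emotion label set are all assumptions for illustration; Xu's actual architecture and the frog-leap preprocessing step are not shown.

    import torch
    import torch.nn as nn

    EMOTIONS = ["neutral", "happy", "sad", "angry", "surprised"]  # assumed label set

    class FrameCNN(nn.Module):
        def __init__(self, n_classes=len(EMOTIONS)):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            )
            self.head = nn.Linear(32, n_classes)

        def forward(self, x):
            return self.head(self.features(x).flatten(1))

    model = FrameCNN().eval()
    frames = torch.randn(30, 1, 48, 48)   # stand-in for 30 greyscale face crops
    with torch.no_grad():
        probs = model(frames).softmax(dim=1)
    timeline = [EMOTIONS[i] for i in probs.argmax(dim=1)]  # emotion per frame
    print(timeline[:5])

Classifying every frame and reading the labels as a sequence is what lets a system of this kind describe how an expression unfolds over time, rather than giving a single verdict from one snapshot.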

Xu reports an accuracy of up to 99 percent, with the system providing an output in a fraction of a second. Such precision and speed are ideal for real-time use in areas where detecting emotion might be useful without the need for subjective assessment by another person or team. Its potential applications lie in improving user experiences with computer interactions, where the computer can respond appropriately to the user's emotional state, such as frustration, anger, or boredom.

The system might be useful in screening people for emotional disorders without initial human intervention. It could also be used to enhance security systems, granting access to resources only to those in a particular emotional state and barring entry to an angry or upset person, perhaps. The same system could even be used to identify driver fatigue on transport systems or in one's own vehicle. The entertainment and marketing sectors might also see applications, where understanding emotional responses could improve content development, delivery, and consumer engagement.

Xu, L. (2024) 'Dynamic emotion recognition of human face based on convolutional neural network', Int. J. Biometrics, Vol. 16, No. 5, pp.533–551.
DOI: 10.1504/IJBM.2024.140785

As computer network security threats continue to grow in complexity, the need for more advanced security systems is obvious. Indeed, traditional methods of intrusion detection have struggled to keep pace with the changes, and so researchers are exploring alternatives. A study in the International Journal of Computational Systems Engineering suggests that integrating data augmentation with ensemble learning methods could improve the accuracy of intrusion detection systems.

Xiaoli Zhou of the School of Information Engineering at Sichuan Top IT Vocational Institute in Chengdu, China, has focused on a Wasserstein Generative Adversarial Network with Gradient Penalty (WGAN-GP). This is an advanced version of the standard generative adversarial network, a machine learning model that can create realistic data through a process of competition between two neural networks. Conventional GANs often suffer from unstable training and mode collapse, where the model fails to generate diverse data. The WGAN-GP variant mitigates these issues by incorporating a gradient penalty which, according to the research, helps to stabilize the training process and improve the quality of the generated data. The generated data can then be used to simulate network traffic for intrusion detection with a view to blocking hacking attempts.
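The gradient penalty itself is a small, well-defined computation. The sketch below shows the standard WGAN-GP penalty term in PyTorch; the toy critic and the ten-dimensional stand-in for traffic features are assumptions, and Zhou's actual networks are not reproduced.

    import torch
    import torch.nn as nn

    # A toy critic over 10-dimensional feature vectors (illustrative only)
    critic = nn.Sequential(nn.Linear(10, 64), nn.LeakyReLU(0.2), nn.Linear(64, 1))

    def gradient_penalty(critic, real, fake):
        # Penalise the critic where its gradient norm strays from 1, measured
        # along random interpolations between real and generated samples.
        eps = torch.rand(real.size(0), 1)
        interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
        score = critic(interp)
        grads, = torch.autograd.grad(score, interp,
                                     grad_outputs=torch.ones_like(score),
                                     create_graph=True)
        return ((grads.norm(2, dim=1) - 1) ** 2).mean()

    real = torch.randn(32, 10)   # stand-in for normalised traffic features
    fake = torch.randn(32, 10)   # stand-in for generator output
    print(gradient_penalty(critic, real, fake))  # added to the critic loss, often weighted by 10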

There is the potential to enhance the WGAN-GP data quality still further by combining it with a stacking learning module. Stacking is an ensemble learning technique that involves training multiple models and then combining their outputs using a meta-classifier. In Zhou's work, the stacking module integrates the predictions from several WGAN-GP-based models so that network traffic can be classified as normal or intrusive.
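As a generic illustration of stacking, the sketch below combines two base learners through a logistic-regression meta-classifier using scikit-learn. The base models, the synthetic dataset, and the binary normal/intrusive labels are stand-ins, not the specific ensemble described in the paper.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier, StackingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    # Synthetic stand-in for traffic features; label 1 = intrusive, 0 = normal
    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

    stack = StackingClassifier(
        estimators=[("rf", RandomForestClassifier(random_state=0)),
                    ("svc", SVC(probability=True, random_state=0))],
        final_estimator=LogisticRegression(),  # meta-classifier over base predictions
    )
    stack.fit(Xtr, ytr)
    print("accuracy:", stack.score(Xte, yte))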

The approach was tested against well-established data augmentation methods, including the Synthetic Minority Over-sampling Technique (SMOTE), Adaptive Synthetic Sampling (ADASYN), and a simple version of WGAN. The results showed that the WGAN-GP-based model had an accuracy rate of almost 90%, better than the scores for the other techniques tested. The model can thus distinguish between legitimate and potentially harmful network activity effectively. Optimisation might improve the accuracy and allow the system to be used to protect governments, corporations, individuals, and others at risk from network security threats.

Zhou, X. (2024) 'Research on network intrusion detection model that integrates WGAN-GP algorithm and stacking learning module', Int. J. Computational Systems Engineering, Vol. 8, No. 6, pp.1–10.
DOI: 10.1504/IJCSYSE.2024.140760

Science-based university spin-offs, especially in the biotech sector, play an important role in transforming cutting-edge academic science into marketable technological products. However, such start-ups face many challenges that can be very different from those encountered by conventional start-ups. Research in the International Journal of Technology Management has looked at the complexities and potential of such spin-offs, shedding new light on the role played by the academic scientists involved and on how launch timing can make all the difference.

Andrew Park of the University of Victoria, Canada, and colleagues explain that unlike typical start-ups, which might bring a product to market relatively quickly, new biotechnology companies often require long periods of financial investment and lengthy development, testing, and regulatory processes for their products. This is particularly true in drug development, where the path from the laboratory bench to the marketplace can span a decade or more, not least because of the need for extensive clinical trials and the completion of regulatory requirements. As such, there is often a greater need to plan strategically and to use resources effectively even before the spin-off company is officially launched.

Many laboratory scientists make the leap from bench to business, some with much greater success than others. The successful scientist-entrepreneurs bring with them their research acumen and intellectual property, but also various intangible assets that can make or break a spin-off company. Among those intangibles might be research publications and patents, networks of contacts and collaborators, and access to funding opportunities that might be unavailable to companies with no direct academic links.

The paper's case studies of three biotechnology spin-offs within the British Columbia innovation ecosystem suggest that the value of intangible assets is usually only realised when strong entrepreneurial capabilities are available to the start-up company. These capabilities are not just about business acumen but also about understanding how to align the technology with market needs, protect intellectual property effectively, and mentor the founding team towards successful biotech commercialization. Critically, the timing of a company's launch can correlate strongly with success or failure, the researchers found.

Park, A., Goudarzi, A., Yaghmaie, P., Thomas, V.J. and Maine, E. (2024) 'The role of pre-formation intangible assets in the endowment of science-based university spin-offs', Int. J. Technology Management, Vol. 96, No. 4, pp.230–260.
DOI: 10.1504/IJTM.2024.140712

A multi-centre research team writing in the International Journal of Metadata, Semantics and Ontologies discusses how they hope to fill a significant gap in the documentation and sharing of research data by focusing on "contextual metadata". The researchers explain that research metadata has traditionally described research outputs, such as publications or datasets. The new stance considers detailed information about the research process itself, such as how the data was generated, the techniques used, and the specific conditions under which the research was conducted.

The project considered six research domains across the life sciences, the social sciences, and the humanities. Semi-structured interviews and a literature review allowed the team to unravel how researchers in each domain manage this kind of contextual metadata. They found that although a considerable amount of such metadata is available, it is often implicit and scattered across various forms of documentation. This fragmentation makes it difficult to identify and use the information effectively.

The team thus suggests that there is a need for a standardized framework for contextual metadata that could be used across all disciplines. Such a framework would support future work on the replicability and reproducibility of research, both of which are important for scientific integrity and validation. Reproducibility refers to obtaining consistent results using the original datasets and methods, while replicability involves obtaining consistent results when the same question is studied afresh with new data.

Additionally, a standardized approach to contextual metadata could reduce research waste and even help reduce research misconduct by providing a clearer and more consistent way to document research processes. However, many challenges remain because of the diverse nature of research practices across different disciplines. Differences in funding models, regulatory requirements, and methods mean that a universal framework might not be directly applicable to all fields. As such, the team has proposed a generic framework that recognizes the need for domain-specific adaptations.

Ohmann, C., Panagiotopoulou, M., Canham, S., Holub, P., Majcen, K., Saunders, G., Fratelli, M., Tang, J., Gribbon, P., Karki, R., Kleemola, M., Moilanen, K., Broeder, D., Daelemans, W. and Fivez, P. (2023) 'Proposal for a framework of contextual metadata in selected research infrastructures of the life sciences and the social sciences & humanities', Int. J. Metadata Semantics and Ontologies, Vol. 16, No. 4, pp.261–277.
DOI: 10.1504/IJMSO.2023.140695

The COVID-19 pandemic not only gave us a global health crisis but also an infodemic, a term coined by the World Health Organization (WHO) to describe the overwhelming flood of information – both accurate and misleading – that inundated media channels. This flood complicated public understanding of, and response to, the pandemic as people struggled to separate fact from fiction.

Researchers writing in the International Journal of Advanced Media and Communication suggest that a lot of attention has been paid to tracking and mitigating the spread of misinformation, but there has been less focus on the characteristics of the messages and sources that allow information to spread. This gap in the research literature has implications for how we might develop better strategies to counteract misinformation, particularly in times of crisis.

Ezgi Akar of the University of Wisconsin, USA, looked at social media updates, "Tweets" as they were once known on the Twitter microblogging platform, since rebranded as "X". At the time of the pandemic, Twitter was a powerful tool for shaping public discourse, playing an important role in the dissemination of information and social interaction, and, unfortunately, in the spread of misinformation.

The research aimed to reveal how the content of a given update and the credibility of its source might contribute to its spread, or reach, across the social media platform and beyond. The hope is to identify factors that might then be influenced to reduce the spread of false information, often referred to as fake news in the vernacular of the time.

Akar's model used three main theoretical frameworks: the Undeutsch hypothesis, which examines the credibility of statements; the four-factor theory, which looks at the various aspects that influence how believable a message is; and source credibility theory, which explores how the perceived reliability of a source affects the dissemination of information. Akar then used the model to analyse a dataset of tweets, both true and false, to look for patterns.

The findings of the study reveal that while the content of an update – such as the use of extreme sentiments, external links, and media such as photos and videos – affects the likelihood of the update being "liked" or shared ("retweeted"), the credibility of the source has more effect on how widely the information spreads. This suggests that users will engage more with content from seemingly credible sources, even if the content itself is not particularly compelling.

An additional finding was that updates written entirely in capital letters were more likely to be shared if they provided true information. Usually, messages written in all capital letters are perceived as aggressive, akin to shouting, or naïve. But "all caps" in an important and urgent message seems to override typical user behaviour in certain situations.

Akar, E. (2024) 'Unmasking an infodemic: what characteristics are fuelling misinformation on social media?', Int. J. Advanced Media and Communication, Vol. 8, No. 1, pp.53–76.
DOI: 10.1504/IJAMC.2024.140646

New Product Development (NPD) is a complex undertaking for any company, and the initial stage of idea screening commonly determines the ultimate success or failure of a product. This important phase usually involves the evaluation of countless product ideas, each of which must be scrutinized for technical feasibility, commercial viability, and practicality. It can throw up many problems, not least because of the uncertainty inherent in predicting a product's market success based on early-stage concepts.

Research in the International Journal of Business Excellence has introduced a new approach to idea screening that could make it more reliable. Mahesh Caisucar of Goa College of Engineering and Rajesh Suresh Prabhu Gaonkar of the Indian Institute of Technology Goa in Ponda-Goa, India, have proposed an approach that addresses one of the key limitations in existing decision-making frameworks, particularly those used in Multi-Criteria Decision-Making (MCDM). MCDM techniques are used to evaluate and prioritize options based on various factors, each of which may hold different levels of importance. However, these weightings can often be skewed inadvertently and so lead to poor decisions.

The new approach uses a hierarchical ranking system that takes into account the relative weight of each option by considering how it stacks up against the sum of all other ratings. This, the researchers suggest, offers a more nuanced perspective on how likely a new product is to be successful. The team tested their hierarchical approach across five main criteria: design, manufacturing, cost, ergonomics, and handling. This gives them a ranking method for obtaining an overall performance score for each product idea.
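One plausible reading of that weighting scheme is sketched below: each idea's rating on a criterion is compared against the summed ratings of the competing ideas, and the relative scores are averaged into an overall figure. The ratings and the exact aggregation are illustrative assumptions; the paper's precise formulation is not reproduced here.

    import numpy as np

    criteria = ["design", "manufacturing", "cost", "ergonomics", "handling"]
    # Rows = product ideas, columns = criteria ratings (hypothetical 1-9 scale)
    R = np.array([[7, 6, 5, 8, 6],
                  [5, 8, 7, 6, 7],
                  [8, 5, 6, 7, 5]], dtype=float)

    # Weigh each rating against the sum of the other ideas' ratings per criterion
    others = R.sum(axis=0, keepdims=True) - R
    scores = (R / others).mean(axis=1)     # overall performance score per idea

    for i in np.argsort(scores)[::-1]:     # rank ideas from best to worst
        print(f"idea {i}: score {scores[i]:.3f}")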

The team suggests that the success of their approach could improve the ability of a company to choose product ideas most likely to be successful in the market.

Caisucar, M. and Gaonkar, R.S.P. (2024) 'A novel hierarchical ranking method for idea screening in new product development', Int. J. Business Excellence, Vol. 33, No. 4, pp.585–601.
DOI: 10.1504/IJBEX.2024.140591

Research in the International Journal of Agile Systems and Management has investigated the relationship between people and their environment, with a particular focus on food. The research by Ysanne Yeo and Masahiro Niitsuma of the Graduate School of System Design and Management at Keio University in Yokohama, Japan, suggests that standard approaches to analysing human behaviour need an upgrade. They suggest that a more holistic view, one that recognizes the complexity of human systems, is needed. The work could lead to a change in the way we design social systems and behavioural interventions.

Traditional methods of studying human behaviour often break down complex systems into separate components. This has the unfortunate side effect of ignoring the interactions seen in real-world situations, and so can result in fragmented understanding that then leads to interventions that do not take into account all the issues underlying that situation.

The new study adopts a model-based systems approach to bring together different aspects of human behaviour and to create a more comprehensive framework for studying them. This, the researchers suggest, should allow a better understanding of the various factors that affect attitudes to healthy eating or otherwise. This could then be used to guide how policymakers and healthcare providers encourage healthier eating habits in a way that does not lead to unintended consequences. The likes of “calorie counting” and “dietary restrictions” are often at odds with the body’s natural signals of hunger and fullness and so more holistic, sustainable, interventions might emerge from this new understanding.

The work points to the need for a more collaborative and nuanced approach to designing social systems that takes into account the knowledge inherent in any human system. This kind of knowledge can play an important role in how people interact with their environment. Understanding the factors involved could help us create environments that better support long-term positive outcomes for individuals and society as a whole.

Yeo, Y. and Niitsuma, M. (2024) 'Proposal of an integral model of human-food interaction: insights for social systems design', Int. J. Agile Systems and Management, Vol. 17, No. 5, pp.48–72.
DOI: 10.1504/IJASM.2024.140464

The business environment is constantly changing, and sometimes does so very rapidly. Research in the International Journal of Agile Systems and Management discusses how Agile Portfolio Management (APM) has emerged as a useful approach to allow companies to align their organizational strategies with the demands of this dynamic and complex environment.

Conventionally, portfolio management has relied on predictive methods that work across a range of project sizes and levels of complexity. However, as businesses increasingly adopt agile methodologies – originally designed for small, close-knit teams – there has been a shift in portfolio management practices. Indeed, this shift has become necessary for continued success. Agile methodologies emphasize flexibility and responsiveness and work well with small-scale projects, but can be problematic when applied to larger, more complex portfolios.

Kwete Mwana Nyandongo of the School of Consumer Intelligence and Information Systems at the University of Johannesburg in South Africa, has demonstrated that scaled agile frameworks, which have been developed to manage large-scale implementations, offer some value, but even these are often inadequate. He found that this is especially true in industries, such as information technology, where rapid technological change and complex project interdependencies are stock-in-trade.

Nyandongo's study goes on to suggest that these frameworks, while useful for large solutions, do not fully address the challenges of managing an entire portfolio in a rapidly changing environment. He says that this shortfall may lead some organizations to struggle with effectively implementing their strategies or responding to new opportunities and facing up to emerging risks.

The answer lies, the study suggests, in taking an even more flexible approach to portfolio management. That approach needs to extend the capabilities of existing scaled agile frameworks and to bring together traditional and agile methods. Such a hybrid approach might better accommodate the deliberate strategies of long-term business plans, as well as exploit the short-term nature of emergent opportunities.

In other words, organizations need to recognize that the methods effective for managing individual projects or even large-scale solutions may not translate directly to managing an entire portfolio. Instead, they must be more adaptable than ever.

Nyandongo, K.M. (2024) 'Relevance of scaled agile practices to agile portfolio management', Int. J. Agile Systems and Management, Vol. 17, No. 5, pp.1–47.
DOI: 10.1504/IJASM.2024.140478

Journal news

Associate Prof. Debiao Meng from the University of Electronic Science and Technology of China has been appointed to take over editorship of the International Journal of Ocean Systems Management.

Prof. Yixiang Chen from East China Normal University has been appointed to take over editorship of the International Journal of Big Data Intelligence.

Inderscience's Editorial Office has announced that the International Journal of Computational Systems Engineering is now an Open Access-only journal. All accepted articles submitted from 15 August 2024 onwards will be Open Access and will require an article processing charge of US$1600. Authors who submitted articles prior to 15 August 2024 will still have a choice of publishing as a standard or an Open Access article. More information on Open Access is available on the Inderscience website.

Dr. Luigi Aldieri from the University of Salerno in Italy has been appointed to take over editorship of the International Journal of Governance and Financial Intermediation.

Newly announced title: International Journal of Artificial Intelligence in Healthcare

The International Journal of Automotive Technology and Management is the latest Inderscience title to be indexed by Clarivate's Emerging Sources Citation Index.

The journal's Editor in Chief, Dr. Giuseppe Giulio Calabrese, had the following to say:

"Reaching this remarkable milestone is a testament to the hard work, dedication and innovation of each and every IJATM board member in contributing to our mission of issuing an outstanding academic journal in industrial organisation and business management.

The goal of IJATM is to publish original, high-quality research within the field of the automotive industry. Our editors actively seek articles that will have a significant impact on theory and practice. IJATM aims to establish channels of communication between policy makers, executives in the automotive industry, both OEM and suppliers, and related business and academic experts in the field.

IJATM has come a long way, but we still have a lot to accomplish. We have ambitious goals and exciting opportunities ahead of us. I am confident that with the talent and passion of our board members, authors and reviewers, we will continue to grow and improve the indexing status of our journal."

Inderscience's Editorial Office is delighted to report that Electronic Government, an International Journal has been indexed by Clarivate's Emerging Sources Citation Index.

The journal's Editor in Chief, Dr. June Wei, would like to take this opportunity to express her deep appreciation to her Editorial Board Members and to Inderscience's Editorial Office staff. She says, "It is all their hard work and great support over the years that's brought Electronic Government the success of being indexed in Clarivate's ESCI."

Clarivate has recently released its latest impact factors, and Inderscience's Editorial Office is pleased to report that many Inderscience journals have increased their impact factors, particularly the European Journal of Industrial Engineering, International Journal of Knowledge Management Studies, International Journal of Applied Pattern Recognition and International Journal of Human Factors and Ergonomics.

Impact factors are displayed on all indexed journals' homepages. We congratulate all the editors, board members, reviewers and authors who have contributed to these latest indexing achievements.
