Questions related to Computer Science
I am currently undertaking an MSc in Computer Science with Cyber Security and have been trying to find a topic for the independent research project that would interest me. I am struggling and am looking for a pointer in the right direction.
I would like to undertake something practical in nature to keep it interesting, and the topic has to include some element of security.
My interests through work are mainly developing serverless applications on AWS, event-driven applications, and cloud computing, but I am really struggling to find something that is both interesting and not purely research-based and theoretical.
Any pointers would be gratefully received. I still have another four months until I need to write the project proposal, but I have already spent the last month or two trying to find something that sparks my interest, with little success.
We are trying to measure empathy in the software engineering domain, and based on our investigation, most of the available scales are designed for and used in the psychology or medical domains. It would be very helpful if you could share any empathy scale developed for, or used in, the software engineering domain.
Specifically, I am looking for a scale that was developed for the software engineering domain, has previously been used in it, or assesses intergroup empathy.
I have seen interesting studies on energy that use machine learning algorithms. As I have a mechanical engineering background, I am not sure whether I can learn and use machine learning. Is a computer science background required? And are the available machine learning tools easy to use for people from other disciplines?
A very interesting topic is the "quantification of randomness". In mathematics it is sometimes studied under complexity theory (although that is more about pseudorandomness than randomness), based on the idea that a more complicated series is more random. There are also statistical tests for randomness, and perhaps the most intriguing measure comes from information theory: entropy (which is also relevant to, and a consequence of, the second law of thermodynamics). In addition, there are pseudorandom number generators and true random number generators using quantum computing.
So what I have been trying to do is compile a complete list of all available algorithms, books, or even random number generators that will tell me how random a series is, allowing me to "quantify randomness".
I have discovered and generated 125 unique infinite pseudorandom series based on a rule. Now, how do I test them for randomness and quantify it? I want to know whether a series is random or whether there is a pattern, i.e., something that would let me predict the next number in the series, given that I don't know what it is.
Does anyone know of any GitHub links related to any of the above (or anything related to quantifying randomness in general that you think would be helpful)?
A book or books on quantifying randomness would be very helpful too. Actually, anything at all...
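For a first, concrete step, one simple (and admittedly crude) quantification is first-order Shannon entropy, which measures how evenly the symbols of a series are distributed. This is a minimal sketch, not a substitute for a full test battery such as the NIST SP 800-22 suite or Dieharder:

```python
import math
from collections import Counter

def shannon_entropy(seq):
    """First-order Shannon entropy of a finite sequence, in bits per symbol.

    Values near log2(alphabet size) mean the symbol frequencies are close
    to uniform; low values mean some symbols dominate, so the series is
    compressible and therefore less random.
    """
    counts = Counter(seq)
    n = len(seq)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

print(shannon_entropy("ABABABABAB"))  # 1.0 (two equally frequent symbols)
print(shannon_entropy("AAAAAAAAAB"))  # ~0.469 (heavily skewed, more predictable)
```

Note that symbol-frequency entropy is blind to order: the perfectly periodic "ABAB..." series above scores the same as a fair coin. Catching such patterns is what order-aware tests (runs tests, block entropy, compression-based estimates of Kolmogorov complexity) are for.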
I graduated with a Master's degree in Computer Science and I have been working as a software engineer for 3 years.
At the moment I am planning to do my PhD in industry. I have some difficulties finding a topic.
The research area would be requirements engineering, systems engineering, and cloud computing.
Do you have some topic suggestions or useful resources?
Thank you in advance.
As a researcher, it is a long-cherished dream of mine to publish a paper in Scientific Reports (Nature). I know this may not be a good question, but I want to know the details. My understanding is that review articles are indexed in well-cited journals because a review combines the insights of many state-of-the-art techniques. I already have some papers in well-cited Q1 journals, and I am now thinking of submitting an article to Scientific Reports. It would be helpful if you could give me detailed information about what kind of work in the computer science domain is accepted by Scientific Reports, and whether review articles can be submitted.
I recently completed a research project in which I implemented an AI application for the agriculture field to automate manual tasks. Now I want to submit it to a good journal that matches this interdisciplinary area. If anyone can suggest some suitable journals, it would help me a lot.
I'm creating a K-12 computer science scope and sequence and want to know what factors will increase enrollment.
I got a message in my inbox stating the following:
"Thank you for getting back to me. My colleagues and I are applying for an immigration plan for the USA; it is called NIW, if you have heard of it. And for this plan we need to increase our citation to be eligible for a USA green-card. Because of this we need your assistance to cite our article in your future articles if you feel they are related to your field and worthy to be mentioned in your work. It is worth mentioning we have colleagues in different fields including engineering (environmental, mechanical, computer science, etc. ..), management and many others; therefore we can provide you with related articles to your field of research. Please let me know what you think of this offer, and if it is fine with you we can discuss the incentives."
Is anyone aware of this? How can one report these kinds of people, and what is this strategy called?
My name is Philips Sanni. I am currently completing my MSc degree in Software Engineering and am searching for a university where I can study for a Ph.D. in a related field, most preferably in the area of artificial neural networks.
If you are a professor in need of a doctoral student, kindly send details of your research and how I can apply to your university.
As the demand for core branches like mechanical and electrical engineering slowly diminishes, I am really curious how far information technology will go, or when it will start to decline.
Which Q1 and/or Q2 research journals in computer science and precision agriculture are most suitable for a speedy review and publication process? Journals that are free of charge are preferable.
Taking into consideration that Australia has an active but, nevertheless, relatively small academic community, are there any signs that 'clinical' cognitive science research in Australia has really broken the barriers between the three traditionally involved disciplines, computer science, psychology and brain-related medicine, especially with the latter of the three (i.e., beyond one-off instances, or minority collaborations)?
Dear friends, I have prepared several papers for publication, but because of the relatively high fees of Open Access journals and my limited budget, I have decided to look for lower-fee or free-of-charge journals where I can publish my manuscripts as soon as possible. I would be grateful if anybody could suggest journals in the fields mentioned above.
Currently, due to data privacy and security concerns, fewer institutes are willing to share their data. What can be done by both the medical and computer science domains to deal with this issue?
I developed an approach for extracting aspects from reviews in different domains, and I now have the aspects. I would like suggestions on how to use these aspects in different applications or tasks, such as an aspect-based recommender system.
Note: Aspect usually refers to a concept that represents a topic of an item in a specific domain, such as price, taste, service, and cleanliness which are relevant aspects for the restaurant domain.
We have a computer science and communication journal at our college (Journal of Computing and Communication). We aim to publish research articles in all disciplines of computer science and communication, with two issues per year. Can anyone tell me the ways and means to index the journal in Google Scholar?
I wonder why researchers in computer science and related fields don't like sharing their source code. Making a paper's source code open on platforms like GitHub would go a long way toward helping upcoming researchers build on the paper, and it makes comparison with other works fair, since errors can creep in when someone re-implements the work in a paper in order to make a comparison.
I am in the third year of Computer Science, and my project is about social robots; I am working on the proposal, ethics form, and literature review. The question above appears in the ethics form.
I want to check whether the Journal of Advances in Mathematics and Computer Science (ISSN: 2456-9968) is fake or not. I could not find it in the Clarivate Analytics list.
Thanks for your help!
1- What are the best books, materials, and programs for teaching "systems analysis and design" to computer science students?
2- What are the main points that students must know in this subject?
3- How can I divide this subject over 10 weeks and achieve good results from students?
4- How can I make teaching this subject interesting?
5- I want sources for the practical part of the subject, so that students can apply what they have learned.
As a newbie, I want to know the key parameters for determining whether a specific journal is fake or real.
I mean: to publish a paper, what parameters should we check before publishing our research work in Pakistan?
I want to dive deeper into data analytics. I have done quite a few basic projects, e.g., getting a random dataset, cleaning/analyzing it, and then making visualizations. However, I want to conduct a more challenging project, e.g., some web scraping, perhaps using SQL and Python to analyze and visualize data in a way that addresses real-world scenarios. But as a typical computer science student, I struggle to come up with good ideas. One idea I had is to gather Uber driving data at my university, store it in a SQL database, do some cleaning and analysis, and try to visualize the busier spots, etc.
I would much appreciate some ideas from those established in this field. Thank you!
Which Q1 and Q2 research journals in computer science and cybersecurity are most suitable for a speedy review and publication process, preferably journals without fees?
I am studying Computer Science and I am currently working on my Bachelor thesis. For that, I am looking for suitable datasets. My goal is to apply Process Mining to these datasets to identify and analyze interesting processes. However, the problem is that these datasets need to be in a certain format to be suitable for Process Mining. The data needs to have a Case Id, Activity, and Timestamp column. In other words, the data needs to be activity-based so that processes with different activity sequences can be found.
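The required shape can be sketched as follows; the case IDs, activities, and timestamps here are invented purely for illustration, and tools such as pm4py consume exactly this kind of event log:

```python
import pandas as pd

# Minimal event log in the shape Process Mining expects:
# one row per event, with Case ID, Activity, and Timestamp columns.
log = pd.DataFrame(
    {
        "case_id": ["order-1", "order-1", "order-1", "order-2", "order-2"],
        "activity": ["create", "approve", "ship", "create", "ship"],
        "timestamp": pd.to_datetime(
            ["2023-01-01 09:00", "2023-01-01 10:30", "2023-01-02 08:00",
             "2023-01-03 11:00", "2023-01-04 09:15"]
        ),
    }
)

# Each case yields an ordered activity sequence (a "trace"), which is what
# process-discovery algorithms operate on:
traces = (
    log.sort_values("timestamp")
       .groupby("case_id")["activity"]
       .apply(list)
)
print(traces["order-1"])  # ['create', 'approve', 'ship']
```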
I wanted to ask if someone has any idea where I could find such datasets? I'd be most interested in datasets in sectors such as energy, waste management, public work (but other input would be helpful as well). So far I mainly could find the datasets from previous years' BPI challenges.
Here is a short page with more information about Process Mining and the desired format (including a brief example):
Any feedback would be highly appreciated.
Thanks in advance,
If I create a website where customers can sell services, what ranking algorithms can I use to rank the gigs on the first page? In other words, just as Google uses HITS and PageRank to rank webpages, what ranking algorithms could I employ for a services-based website?
Any assistance, or references to scientific papers that could help, would be appreciated.
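Link-analysis algorithms like HITS and PageRank need a link graph, which a gig marketplace usually lacks, so such sites typically rank by a relevance score built from engagement signals (often framed as "learning to rank"). A minimal hand-weighted sketch, with entirely illustrative features and weights, might look like this:

```python
from dataclasses import dataclass

@dataclass
class Gig:
    title: str
    avg_rating: float       # 0..5 buyer rating
    completed_orders: int   # fulfilment history
    days_since_active: int  # recency of seller activity

def score(gig: Gig, w_rating=0.5, w_orders=0.3, w_recency=0.2) -> float:
    """Hand-weighted relevance score; the weights are illustrative, not tuned."""
    rating = gig.avg_rating / 5.0
    orders = min(gig.completed_orders, 100) / 100.0  # cap so volume can't dominate
    recency = 1.0 / (1.0 + gig.days_since_active)    # decays with inactivity
    return w_rating * rating + w_orders * orders + w_recency * recency

gigs = [
    Gig("logo design", 4.8, 120, 1),
    Gig("seo audit", 4.2, 10, 30),
]
ranked = sorted(gigs, key=score, reverse=True)
print([g.title for g in ranked])  # ['logo design', 'seo audit']
```

In production systems the weights are usually learned from click and purchase data rather than set by hand, which is where the learning-to-rank literature (e.g., pointwise/pairwise ranking models) becomes relevant.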
Is it possible for a researcher in computer science, software engineering, or an information-technology-related field to suggest a research topic for a research student in a business school, especially in marketing?
Recently, as a Ph.D. candidate, I have been approached by some friends in business school studying management, marketing, etc., asking me to suggest research topics for them.
I would like to know whether it is feasible for an IT student to give a topic to such students.
I'm conducting a research on digital literacy and its linkage to the digital economy in a developing country like Pakistan.
I'm looking for experts in the following areas: economics, literacy, primary academia, digital economy, entrepreneurship, digital literacy, computer science, computer engineering, IT, as well as other associated fields.
I would be really grateful if you could take some time out to fill my questionnaire survey.
This questionnaire corresponds to my first area of focus: the impact of digital literacy on the digital economy.
For context: the digital economy embodies all economic transactions that either require the use of digital technologies or are related to the selling & purchasing of digital goods & services.
For the scope of this study, digital literacy has been defined through key competences outlined by the UN in their Digital Literacy Global Framework. The purpose of this study is to determine whether there is a relationship between digital literacy and growth in the digital economy. Furthermore, this study aims to map the competences of digital literacy against the factors leading to growth in the digital economy.
For any queries and concerns, you may reach out to us via email at email@example.com
There are high-status conferences such as NeurIPS, ICSE, and ACL, which sometimes accept more than 1000 papers each year.
On the other hand, there are several Q1 journals (with high impact factors) in each category.
Based on your experience, what are the pros and cons of each for you as a researcher? How well is each received when you are applying for a position?
I started my Master's degree in software engineering a few months ago, and I am currently looking for trends and hot topics in the software engineering area.
I would really appreciate any suggestions for my thesis topic.
Suppose I want to teach beginner first- and second-year university students, possibly as their first programming language, though some may have programming background in another language. What Python programming textbooks do you suggest?
Also, what Python programming textbooks do you suggest for teaching advanced topics?
The template of this journal keeps throwing errors of the kind "There is no line to end here" and therefore will not compile. Can anyone help me sort out this problem, or alternatively provide a working template? Thanks.
The question of how computers can contribute to controlling the COVID-19 pandemic is being posed to experts in artificial intelligence (AI) all over the world.
AI tools can help in many different ways. They are being used to predict the spread of the coronavirus, map its genetic evolution as it transmits from human to human, speed up diagnosis, and in the development of potential treatments, while also helping policymakers cope with related issues, such as the impact on transport, food supplies and travel.
But in all these cases, AI is only effective if it has sufficient examples to learn from. As COVID-19 has taken the world into uncharted territory, the "deep learning" systems that computers use to acquire new capabilities don't necessarily have the data they need to produce useful outputs.
Last week, one of my manuscripts was rejected by the International Journal of Human-Computer Interaction, and I now want to resubmit it to another journal. Could anybody suggest a Q2 or Q3 journal? The title and abstract of the manuscript are given below:
Title: Dynamic User Experience for efficiency enhancement based on facial expressions
Abstract: The main motive of Human-Computer Interaction is to make humans comfortable while working with interactive computing devices, so as to increase human efficiency, reduce trouble, and save time. In this paper, we first recognize the face and then change the UI automatically based on the user's facial expression. Some of our personas also proposed a similar idea of building a system that would play music based on facial expression. These scenarios gave us the idea of building an integrated system of dynamic user experience based on facial expression. We collected data using questionnaire and interview approaches and made low-fidelity prototypes during the requirement-gathering phase. We also made high-fidelity prototypes using Axure RP to show the stakeholders the likely output of this work. In the next phase, we followed a software engineering model and implemented our code in Visual Studio with the Live Server extension. We then used the cognitive walkthrough model as our evaluation method. During the evaluation, stakeholders did not need to provide any input manually, and the system was easy to learn and use. We found that a high-speed internet connection is required, and we had to use a VPN to handle some issues. Users did not feel fatigue or discomfort at all because the system is very easy to learn: anyone who wants to use it just needs to be in front of the camera. Overall, users were very comfortable and happy to use our system.
Thanks in advance.
Dear Researchers, Modellers, and Mathematicians,
As we know, in mathematics, computer science, and physics, a deterministic system is a system in which no randomness is involved in the development of future states. A deterministic model will thus always produce the same output from a given starting condition or initial state. In this regard, I am looking for examples of deterministic systems from daily life. Thank you!
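One everyday example is compound interest on a savings account: the future balance is fully determined by the starting state (principal, rate, duration), with no randomness involved. A minimal sketch, with illustrative numbers:

```python
def balance_after(principal: float, annual_rate: float, years: int) -> float:
    """Compound interest as a deterministic system: the same initial state
    always evolves to the same future state."""
    balance = principal
    for _ in range(years):
        balance *= (1 + annual_rate)
    return balance

# Re-running from the same starting condition gives the identical result
# every time -- there is no randomness in the evolution rule.
print(balance_after(1000.0, 0.05, 10))  # ~1628.89 on every run
```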
I have an idea for a scientific article. I have prepared the practical part and am now calculating the results.
If you are able to write the theoretical part of the article well and quickly, please write to me so we can publish it at a conference (the deadline for submissions is 20 August 2021).
Research area: data security.
I'm looking for Ph.D. programs (Scholarships) in Europe/USA/Canada/Australia/Great Britain.
Professors who are looking for a Ph.D. candidate: I'm ready to work on any new subject in the computer science field, especially deep learning/machine learning.
I really appreciate your help!
My friend and I wrote a research paper in computer science (cryptography). The article is a simple contribution. We need someone to join us and do a thorough grammar correction of the theory part. In addition, improvements to the proposed contribution may be suggested.
We are looking for journals with a review time of 2-4 weeks, a publication time of under 6 months, and an impact factor greater than 1.
With the increasing importance and implementation of computer applications in modern agriculture, should agricultural universities equip their students with advanced computer knowledge, or should they depend only on pure computer scientists?
Who will fill the gap between agriculture and computing?
I am writing a review article on facial analysis in biomedical data science. As part of this there is a section on automatic and manual facial landmarking.
There is a large literature on this topic in computer science, but it is usually more focussed on computer vision than medical applications.
I am wondering if there exists established and respected software for automatic landmarking of facial images in a biomedical context? Any help is much appreciated.
- What are the hot topics and future directions for recommender systems?
- Can we publish high-impact-factor papers on recommender systems?
- Are there any other topic suggestions for a PhD in computer science that would benefit the student in the long term?
Hello researchers, I would like to know some Q1 paid journals with fast publication in the field of computer science.
I have a question: we want to apply ANNs to regression analysis, which is a fairly straightforward use of ANNs, but the question is, how many samples do we need for training? Would 12 samples be enough? I produced these 12 samples using the Fractional Factorial Design (FFD) method and need to be sure about this. I would therefore be grateful for any information on this subject.
Many thanks in advance for your time and kind consideration.
Reference for FFD method: https://en.wikipedia.org/wiki/Fractional_factorial_design
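Whether 12 samples are enough depends on the noise level and on how many parameters the model fits; one practical check is leave-one-out cross-validation on the 12 design points. The sketch below uses synthetic stand-in data and a plain linear least-squares fit for clarity (an ANN would slot into the same loop, but with many free weights it overfits n = 12 far more easily):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for 12 FFD-designed samples: 3 factors, 1 response.
X = rng.uniform(-1, 1, size=(12, 3))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 0.05, 12)

# Leave-one-out cross-validation: fit on 11 points, predict the held-out one.
errors = []
for i in range(len(y)):
    mask = np.arange(len(y)) != i
    A = np.column_stack([X[mask], np.ones(mask.sum())])  # design matrix + intercept
    coef, *_ = np.linalg.lstsq(A, y[mask], rcond=None)
    pred = np.append(X[i], 1.0) @ coef
    errors.append((pred - y[i]) ** 2)

rmse = float(np.sqrt(np.mean(errors)))
print(f"LOO RMSE: {rmse:.3f}")
```

If the leave-one-out error is small relative to the range of your response, 12 samples may be adequate for a model of that complexity; if not, either more samples or a simpler model is needed.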
I'm interested to know whether you have had educational experiences with m-learning (mobile learning) in engineering or computer science. In the literature, there seem to be few proposals for m-learning platforms or applications to strengthen students' skills or competences in the foundations of engineering, or even computer science; typical applications are found, e.g., in EFL and mathematics.
In this context, what have been your experiences with m-learning in engineering or computer science education, and what advantages or difficulties do you see regarding m-learning?
Thank you for your answers.
Is the International Journal of Advanced Computer Science and Applications (IJACSA) indexed in Scopus?
Does the journal have an SJR?
I have three queries mentioned below:
- I submitted my manuscript around two months ago to Computers and Electrical Engineering, An International Journal. The average time to an initial decision is 4-7 weeks (as stated on the journal's webpage), but it has now been around 11 weeks and I still have not received any updates. Please let me know what I can do next.
- Also, is it possible to submit the same manuscript to more than one journal at once to save time? If any of the journals replies, I would withdraw the manuscript from the others.
- Please let me know some free Scopus-indexed journals related to computer engineering/computer security that make quick decisions, because I am in a hurry. This is my first time publishing a paper and I want it published at the earliest (I need to go abroad and the deadline for scholarships is approaching).
During the research process in computer sciences, we need a set of tools such as:
- IDE for algorithms coding
- LaTeX editor for writing papers
- Statistical toolkit for experiments
- Some CAD tools for design
From your point of view, which tools should I use for research in computer science? Could you please provide examples of these tools?
Can you guess which is the most mysterious and enigmatic physical thing among the following: biological cells, light, elementary particles (e.g., electrons, neutrons, or protons), viruses, fungi, bacteria, atoms, chemical compounds, blood cells, or, finally, plain old components, in the context of engineering paradigms (e.g., mechanical, electronics, or aerospace) for designing and building large products (e.g., cars, airplanes, computers, factory machinery, or spacecraft)?
The greatest tools for acquiring and using knowledge for technological progress and great inventions are (i) the scientific method and (ii) mathematics; these two tools provide complementary perspectives for gaining deeper insights. Each acts like a light that illuminates mutually complementary sides, perspectives, or dimensions. Since software researchers refuse to use the scientific method (i.e., the light of science), the software community has wasted 50 years, failed to solve the software crisis, and ended up with a useless fake CBE paradigm.
If fake scientists still don't realize that it is a mistake to blatantly violate scientific principles, they are going to repeat the same kind of mistakes in artificial intelligence research and development. Many things will stay enigmatic and end up in a crisis, like the software crisis. Many things that are inexplicable, puzzling, or enigmatic from the perspective of mathematics can become crystal clear from the scientific perspective, since the light of the scientific method illuminates the dark spots left by the light of mathematics.
Today, the greatest enigmas for researchers of software and computer science include the answers to simple questions such as: what is meant by a component in the context of all the other engineering disciplines, and what is meant by CBE (Component-Based Engineering), which successfully eliminated the engineering crisis from designing and building large and complex products (unlike the software crisis)?
Even if we know just 30% about bacteria or viruses, each and every piece of knowledge documented in the textbooks can be included if and only if it is supported by a falsifiable proof. It is impossible to find a piece of such knowledge that is not supported by a falsifiable proof. There is a possibility that 20% of the knowledge in the textbooks might be falsified in the future by counter-evidence, such as new discoveries or empirical findings.
Since mankind has enough valid knowledge about things such as bacteria, light, and electrons, researchers have been able to invent great things such as treatments for many kinds of infections, fibre-optic networks, and semiconductor chips, respectively.
On the other hand, none of the knowledge about components in the textbooks for computer science or software is either tested (e.g., no one has challenged it) or supported by any falsifiable proof. There is a possibility that up to 20% of that knowledge might be proven valid in the future. However, I am sure that 80% of the knowledge in the textbooks is invalid and has never been open to challenge.
Even simple things, such as what a component is and what is meant by CBE, have stayed enigmatic and mysterious for many decades, since the knowledge in the textbooks about components is untested and invalid. Fake scientists at the NSF (which I prefer to call the National Fake Science Foundation) feel offended if anyone challenges their myths about so-called components.
Anything is less enigmatic or mysterious, even if we have only 30% valid knowledge about it, than another thing for which there is huge knowledge, a significant portion of which is invalid. Hence, plain old components are far more mysterious and enigmatic than invisible things such as viruses, electrons, and biological cells. We made many useful inventions even by relying on limited valid knowledge.
Can you name any physical thing on Earth that is more mysterious and enigmatic for the scientific community than the plain old components used for designing and building large Component-Based Products (CBPs), taking into consideration all the knowledge in the published scientific literature and textbooks across all scientific disciplines?
A thing must be the most mysterious and enigmatic if there is a large BoK (Body of Knowledge) for it and a large percentage of that BoK is invalid (e.g., untested and unproven). The main thing that makes anything enigmatic is not just the lack of a sufficient valid BoK but also having large chunks of invalid knowledge.
Isn't it fascinating? Even knowledge that is simple to acquire will stay mysterious and enigmatic (and create a paradox and a crisis) if researchers refuse to use the light of scientific principles to illuminate dark spots that are in the realm of science, since such dark spots can't be illuminated by the light of mathematics.
I invented solutions for the software crisis by gaining the scientific knowledge essential for understanding the mysterious components needed to achieve the elusive and enigmatic CBE paradigm, in the context of all the other engineering disciplines. The fake scientists of computer science foolishly refuse to use the light of the scientific method.
The NSF is supposed to uphold scientific principles and the scientific method, but it is breaking the scientific principles, protocols, and code of conduct for scientific discourse that are essential for the progress of science and technology. Any accepted theory (i.e., a theory, or concepts derived from it, that practitioners of any craft or trade rely on) must be treated as an assumption if the theory is not supported by a falsifiable proof (backed by repeatable evidence and/or verifiable facts).
The practitioners of astronomy and astrology practiced their trade or craft until the 16th century by relying on the 2300-year-old theory that "the Earth is static at the centre" (and the concepts and observations derived from that theory). Mankind falsely concluded that "the Earth is static at the centre" was a self-evident fact, so no one bothered to support this unproven theory with a falsifiable proof.
Since there was no falsifiable proof for such core first principles at the foundation, it was impossible to challenge the huge BoK (Body of Knowledge) acquired and accumulated over 1800 years for creating the paradigm that was dominant until the 16th century. The scientific community in the dark ages used illegitimate circular logic to defend the core first principles.
For example, they used observable facts such as epicycles, the non-uniform speeds of planets, the lack of stellar parallax, and retrograde motions to defend the presumption that "the Earth is static at the centre". Countless concepts, observations, and other derived theories in the whole BoK accumulated over 1800 years could be used to defend the belief that "the Earth is at the centre".
The scientific method and the protocols and processes for discourse were created and perfected to prevent exactly this. The biggest problem in subverting a flawed dominant paradigm is overcoming illegitimate circular logic, which relies on the huge BoK acquired and accumulated for the paradigm. This can be prevented by having a falsifiable proof for the core first principles at the foundation of any dominant paradigm.
When there is a falsifiable proof and the theory is flawed, it is straightforward to falsify the proof by finding one or more pieces of verifiable and/or repeatable counter-evidence. This is why the scientific method was created: it requires that each theory be supported by a falsifiable proof.
Unfortunately, today's software researchers and experts use the huge BoK in the textbooks and published literature that has been acquired and accumulated over the past 50 years by relying on untested and unproven core first principles in the pre-paradigmatic foundation, such as the notions of so-called components for software and of computer science as a branch of mathematics.
About 80% of the accumulated knowledge we have in textbooks and other published literature about components for software is untested, unchallenged, and invalid. Having invalid knowledge makes anything enigmatic, mysterious, or paradoxical. Anything becomes more and more enigmatic, mysterious, or paradoxical as it accumulates more and more knowledge while a larger and larger percentage of that knowledge is invalid.
Every piece of scientific knowledge about any physical thing in a textbook must be well tested and challenged, and must be supported by a falsifiable proof backed by empirical evidence that is open to challenge. The scientists of computer science should be ashamed of themselves if they feel offended by counter-evidence or facts that expose untested or unproven knowledge about the enigmatic components.
Isn't it pathetic if the NSF (National Fake Science Foundation) doesn't know or can't understand basic scientific principles, processes, and the basic code of conduct? I oppose passing "The Endless Frontiers Act (S. 3832)" to fund the Fake Science Foundation until the fake scientists at the NSF understand basic scientific principles and processes and strictly implement the code of conduct for upholding the truth.
I wish to file a court case to block the act (i.e., The Endless Frontiers Act) to prevent tens of billions of dollars from being flushed down the drain by the fake scientists at CISE, since nearly 50% of the US$100 billion goes to the CISE of the Fake Science Foundation.
I am doing a computer science dissertation on the topic "An automated text tool to analyse reflective writing".
The hypothesis set is: "To what extent is the model valid for assessing reflective writing?" I want to use the questionnaire (closed-ended questions and one open question) to validate the proposed model.
I have used a 5-point Likert scale for collecting the data, with the options strongly agree, agree, neutral, disagree, and strongly disagree. The sample size is 10 participants, chosen based on their experience, career, and knowledge of reflective writing.
1) Which statistical analysis tool should I use to analyse a sample size of 10 to validate the model? Please show me step by step how to analyse the data.
2) What would be the associated hypothesis?
3) Can I use the Content Validity Index with 10 participants on questionnaires using a 5-point Likert scale?
4) Is this step of my research a qualitative method or a quantitative method? Why?
If you have any suggestions on my hypothesis, the sample size, or the analysis tool, please share them.
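On question 3: the item-level Content Validity Index (I-CVI) is straightforward to compute with 10 raters; it is simply the proportion of raters who score an item as relevant (conventionally 4 or 5 on a 5-point scale). A minimal sketch with invented ratings (the 0.78 acceptability cut-off often attributed to Polit and Beck for this rater count is a convention; check what your programme expects):

```python
def item_cvi(ratings, relevant_threshold=4):
    """Item-level Content Validity Index (I-CVI): the proportion of raters
    scoring the item at or above the threshold on a 5-point scale."""
    agree = sum(1 for r in ratings if r >= relevant_threshold)
    return agree / len(ratings)

# Example: 10 raters scoring one questionnaire item
# (strongly agree = 5 ... strongly disagree = 1); values are invented.
ratings = [5, 4, 4, 5, 3, 4, 5, 4, 2, 4]
print(item_cvi(ratings))  # 0.8 -> 8 of 10 raters rated the item 4 or 5
```

Averaging the I-CVIs over all closed-ended items gives a scale-level index (S-CVI/Ave), which is one common way to summarise content validity for a whole questionnaire.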
Thank you in advance!
I would like to start a discussion on which index is more reliable, the h-index or the i10-index. Both are usable, but their methods of calculation differ. There is also the g-index. I am not asking about the differences but about their reliability. Any comments are welcome.
I have heard conflicting answers on this ranging from "do it to make your research accessible" to "only do it if you're invited" to "don't do it at all." The most moderate advice I saw was "one or two is fine as long as you have several other journal publications."
If the answer depends on my field, I'm in computer science and software engineering.
Rodgers' evolutionary concept analysis is used in the nursing field. I could not find any paper showing that Rodgers' evolutionary concept analysis has been used in any field other than nursing.
Is it possible to use it in the computer science field?