Do large test methods indicate a code smell?
I have a particular method called TranslateValues() (cyclomatic complexity of 5) which I would like to test.
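The original TranslateValues() is not shown, so here is a hypothetical Python stand-in with the same cyclomatic complexity of 5 (four branch predicates plus one), sketching why that number suggests at least five small test cases rather than one large test method:

```python
# Hypothetical stand-in for TranslateValues(): four predicates
# (if/elif/elif/elif) plus one give a cyclomatic complexity of 5,
# i.e. five independent paths that each deserve a test.
def translate_value(code: str) -> str:
    if code == "A":
        return "alpha"
    elif code == "B":
        return "beta"
    elif code == "C":
        return "gamma"
    elif code == "D":
        return "delta"
    else:
        return "unknown"

import unittest

class TranslateValueTests(unittest.TestCase):
    # One small assertion per path keeps each test method short even
    # though the method under test branches five ways.
    def test_known_codes(self):
        for code, expected in [("A", "alpha"), ("B", "beta"),
                               ("C", "gamma"), ("D", "delta")]:
            self.assertEqual(translate_value(code), expected)

    def test_unknown_code(self):
        self.assertEqual(translate_value("X"), "unknown")

if __name__ == "__main__":
    unittest.main()
```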
What does it mean when we say that some function is polynomially bigger/smaller than some other function?
I was going over this video lecture on the master theorem from Introduction to Algorithms, and while explaining case A of the master theorem the professor says that some function f(n)
is polynomially smaller than some other function, at 53:08:
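Assuming the lecture follows the standard CLRS statement, "polynomially smaller" in that case means smaller by a full polynomial factor n^ε, not merely smaller; a sketch of the definition with one worked instance:

```latex
% f(n) is polynomially smaller than n^{\log_b a} when it sits below it
% by a factor of n^{\varepsilon} for some constant \varepsilon > 0:
f(n) = O\!\left(n^{\log_b a - \varepsilon}\right), \quad \varepsilon > 0.
% Worked instance: T(n) = 9\,T(n/3) + n has n^{\log_3 9} = n^2, and
% f(n) = n = O(n^{2-\varepsilon}) with \varepsilon = 1, so this case
% of the master theorem gives T(n) = \Theta(n^2).
% By contrast, n^2/\log n is smaller than n^2 but NOT polynomially
% smaller: no factor n^{\varepsilon} fits under a \log n gap.
```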
What does it mean by expected running time and average running time of an algorithm?
Let’s say we want to analyze the running time of algorithms. Sometimes we say that we want to find the running time of an algorithm when the input size is n, and for the worst possible case we denote it by O(n). Sometimes, though, I see books/papers saying that we need to find the expected time of an algorithm, and sometimes the average running time is used.
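A hedged summary of the usual distinction (terminology varies across books): expected running time averages over the algorithm's own random choices on a fixed input, while average-case running time averages over a distribution of inputs:

```latex
% Randomized algorithm, fixed input x, internal coin flips R:
T_{\mathrm{exp}}(x) = \mathbb{E}_{R}\left[\,T(x, R)\,\right]
% Deterministic algorithm, inputs of size n drawn from a distribution D:
T_{\mathrm{avg}}(n) = \mathbb{E}_{x \sim D,\ |x| = n}\left[\,T(x)\,\right]
% Example: randomized quicksort runs in expected O(n \log n) time on
% every input, whereas quicksort with a fixed first-element pivot is
% O(n \log n) only on average over random permutations and \Theta(n^2)
% in the worst case.
```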
Programmatically finding the Landau notation (Big O or Theta notation) of an algorithm?
I’m used to working out the Landau (Big O, Theta, …) notation of my algorithms by hand to make sure they are as optimized as they can be, but when the functions get really big and complex, doing it by hand takes way too much time and is prone to human error.
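The exact Landau class of arbitrary code cannot be decided automatically in general, but a rough empirical estimate can be. A minimal Python sketch (all names here are illustrative) that times a function at growing input sizes and fits the slope of log(time) against log(n):

```python
import math
import random
import timeit

def estimate_growth(func, sizes, make_input):
    """Fit the least-squares slope of log(time) vs log(n); a slope
    near k suggests roughly O(n^k). Heuristic only: finitely many
    measurements can never prove an asymptotic bound."""
    points = []
    for n in sizes:
        data = make_input(n)
        t = timeit.timeit(lambda: func(data), number=5) / 5
        points.append((math.log(n), math.log(t)))
    m = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    return (m * sxy - sx * sy) / (m * sxx - sx * sx)

# sorted() should come out with a slope close to 1; the log factor
# in n log n only nudges the fitted exponent slightly above linear.
slope = estimate_growth(sorted, [2**k for k in range(10, 15)],
                        lambda n: [random.random() for _ in range(n)])
print(f"estimated exponent: {slope:.2f}")
```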
Complexity of a web application
I am currently writing my Master’s Thesis on the maintainability of a web application. I found some methods like the “Maintainability Index” by Coleman et al. or the “Software Maintainability Index” by Muthanna et al.
For both of them one needs to calculate the cyclomatic complexity. So my question is:
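For reference, the commonly quoted three-metric form attributed to Coleman et al. (coefficients vary slightly between sources, so check the paper you cite):

```latex
% aveV   = average Halstead volume per module
% aveG   = average (extended) cyclomatic complexity per module
% aveLOC = average lines of code per module
MI = 171 - 5.2\,\ln(\mathit{aveV}) - 0.23\,\mathit{aveG} - 16.2\,\ln(\mathit{aveLOC})
```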
Why is the bound of a linear function the same as that of a quadratic function?
I am learning algorithms, and I came across something very interesting.
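The excerpt does not quote the passage, but presumably the point is that Big-O is only an upper bound, so a linear function is also O(n²) even though that bound is not tight; a worked instance:

```latex
% For all n \ge 1, each term of 3n + 5 is dominated by a multiple of n^2:
3n + 5 \;\le\; 3n^2 + 5n^2 \;=\; 8n^2,
% so 3n + 5 = O(n^2) with witnesses c = 8, n_0 = 1, even though the
% tight bound is \Theta(n). The same function is of course also O(n).
```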
How can I compute the Big-O notation for a given piece of code? [closed]
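Since the question's code is not quoted, here is a generic worked example in Python of reading the bound off the loop structure by counting how often the innermost statement runs:

```python
# Count executions of the innermost line as a function of n.
def pair_sums(values):
    n = len(values)
    total = 0
    for i in range(n):                       # n iterations
        for j in range(i + 1, n):            # n - i - 1 iterations
            total += values[i] + values[j]   # n(n-1)/2 executions total
    return total

# n(n-1)/2 = n^2/2 - n/2, and Big-O drops constants and lower-order
# terms, so pair_sums runs in O(n^2) time.
```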
Using Completed User Stories to Estimate Future User Stories
In Scrum/Agile, the complexity of a user story can be estimated in story points. After completing some user stories, a programmer or team of programmers can use those experiences to better estimate how much time it might take to complete a future user story.
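One common way to operationalize this is velocity-based forecasting; a minimal sketch, with all numbers made up for illustration:

```python
# Velocity = average story points completed per sprint; the forecast
# divides the remaining backlog by that velocity. Hypothetical data.
completed_per_sprint = [21, 18, 25, 20]
velocity = sum(completed_per_sprint) / len(completed_per_sprint)

backlog_points = 168  # remaining estimated work, also hypothetical
sprints_needed = backlog_points / velocity

print(f"velocity: {velocity:.1f} points/sprint")
print(f"forecast: about {sprints_needed:.1f} sprints remaining")
```

Note that this treats points as comparable across stories and stable over team changes, an assumption worth validating against the team's own history.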
Computational Complexity of Correlation in Time vs Multiplication in Frequency space
I am working with 2D correlation for image processing techniques (pattern recognition, etc.). I was wondering whether there is a theoretical approach to telling when to use multiplication in frequency space over correlation in time space. For power-of-two sizes, frequency space is obviously faster, but what about small, prime sizes, e.g. 11?
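A back-of-envelope count (constants are implementation-dependent, so treat this as a heuristic and measure both paths): correlating an N × N image with a K × K kernel costs

```latex
\text{direct:}\quad \Theta\!\left(N^2 K^2\right)
\qquad
\text{FFT-based:}\quad \Theta\!\left((N+K)^2 \log (N+K)\right)
% The FFT route pads to (N+K-1)^2, runs three 2D transforms, and
% multiplies pointwise. For a small kernel such as K = 11, the direct
% method does only K^2 = 121 multiply-adds per output pixel and often
% wins; prime transform sizes also force padding or Bluestein's
% algorithm, raising the FFT constant further.
```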
Is O(1) random access into variable-length encoded strings useful?
I remember reading that there are no existing data structures that allow random access into a variable-length encoding, like UTF-8, without requiring additional lookup tables.
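A minimal Python sketch of exactly such a lookup table (the sampling interval k and the helper names are mine): storing the byte offset of every k-th code point gives O(1)-time access for fixed k, at the cost of the extra index the question mentions:

```python
# Sampled offset index over a UTF-8 byte string: one stored offset per
# k code points, so a lookup is one table read plus at most k - 1 scan
# steps: O(1) for a fixed k, with ~len(data)/k extra entries.
def is_continuation(byte: int) -> bool:
    return byte & 0b1100_0000 == 0b1000_0000  # UTF-8 trailing byte?

def build_index(data: bytes, k: int = 64) -> list[int]:
    index, count = [], 0
    for offset, byte in enumerate(data):
        if not is_continuation(byte):  # lead byte starts a code point
            if count % k == 0:
                index.append(offset)
            count += 1
    return index

def char_at(data: bytes, index: list[int], i: int, k: int = 64) -> str:
    offset = index[i // k]             # jump to the nearest sample
    for _ in range(i % k):             # scan forward < k code points
        offset += 1
        while is_continuation(data[offset]):
            offset += 1
    end = offset + 1
    while end < len(data) and is_continuation(data[end]):
        end += 1
    return data[offset:end].decode("utf-8")

text = "naïve – ☃ тест".encode("utf-8")
idx = build_index(text, k=4)
print(char_at(text, idx, 6, k=4))  # '–', same as text.decode()[6]
```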