What is the reason for choosing PascalCasing over camelCasing, or vice versa, from a programming language design point of view?
I like both, but I notice that languages which use camelCasing for members sometimes need more adjustments when you edit your code. For example (in Python):
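Say a camelCase member later grows into a class of its own (a hedged sketch of the kind of adjustment I mean; the names here are only illustrative, not from the original post):

    class Order:
        def taxRate(self):          # a camelCase member
            return 0.2

    # If the rate calculation later grows into its own class, the natural
    # class name differs from the member only in its leading letter, so every
    # call site must be edited from order.taxRate() to TaxRate(order).rate():
    class TaxRate:
        def __init__(self, order):
            self.order = order      # details elided; illustration only

        def rate(self):
            return 0.2

With PascalCase members the identifier would already be spelled TaxRate, so only the call syntax changes; with camelCase members the leading letter has to be flipped at every call site as well.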
Storing tokens during lexing stage
I am currently implementing a lexer that breaks XML files up into tokens, and I am considering how to pass those tokens on to a parser that builds a more useful data structure from them. My current plan is to store them in an ArrayList and hand that to the parser. Would a linked list, where each token points to the next, be better suited? Or does being able to access tokens by index make the parser easier to write? Or is this whole strategy a bad one?
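A minimal sketch of what I mean by the list-plus-index option (the token kinds and names are placeholders, not from any real implementation):

    from dataclasses import dataclass

    @dataclass
    class Token:
        kind: str   # e.g. "OPEN_TAG", "TEXT", "CLOSE_TAG"
        value: str

    class Parser:
        def __init__(self, tokens):
            self.tokens = tokens  # plain list produced by the lexer
            self.pos = 0

        def peek(self, offset=0):
            # Look ahead without consuming; indexed access makes this cheap,
            # which a singly linked list would make awkward.
            i = self.pos + offset
            return self.tokens[i] if i < len(self.tokens) else None

        def consume(self):
            tok = self.tokens[self.pos]
            self.pos += 1
            return tok

The parser only ever moves forward and occasionally peeks ahead, which is exactly the access pattern a flat list with an index serves well.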
Are multi-line comments a critical facility in a modern language?
I’m trying to convince the designers of a language that multi-line comments with an arbitrary start and end are important and should be included. Currently there is only a “comment-to-end-of-line” primitive.
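Python is a live example of the line-comment-only situation: disabling a region means prefixing every line, and the common workaround, an unassigned string literal, is parsed rather than skipped:

    # With only line comments, disabling a region means prefixing every line:
    # step_one()
    # step_two()
    # step_three()

    # The usual workaround is a bare string literal. It is still parsed (its
    # contents must form a valid string), not skipped outright the way a true
    # block comment with arbitrary start/end delimiters would be:
    """
    step_one()
    step_two()
    """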
If null is bad, why do modern languages implement it?