International Symbol Recognition Contest at GREC'2003

Call for Expressions of Interest

This questionnaire aims to gather input from potential participants in the symbol recognition contest, in order to determine their preferences regarding application domains, data sets, allowed deformations, performance evaluation criteria, contributions, etc.

Please fill in as much of this questionnaire as you can, with answers as accurate as possible. Consider it an open canvas, and do not hesitate to explain your expectations for such a contest.

Thank you in advance for any feedback and contributions you can provide toward the organization of the contest.

Personal data

Phone number:

Application domains

Is your method restricted to a single domain?

Yes No

If so, which domain? Otherwise, on which application domains do you currently work?

Electronics Maps Logos
Engineering Musical scores Tables & diagrams
Architecture Mathematical formulas Other (please, specify)

On which other domains do you think your method would also work?

Electronics Maps Logos
Engineering Musical scores Tables & diagrams
Architecture Mathematical formulas Other (please, specify)

Among all these domains, which are of most interest to you?

Electronics Maps Logos
Engineering Musical scores Tables & diagrams
Architecture Mathematical formulas Other (please, specify)

Data sets

Would you be willing to contribute images or other data from your application domains, containing typical symbols that you attempt to recognize?

Yes No

Would these images be free to use for the contest (i.e., contest use only)?

Yes No

Could these images contribute to a corpus usable by the scientific community (i.e., all uses allowed)?

Yes No

Is ground truth available for these images?

Yes No

What are the format and resolution of these images, or the format of other types of data?

Do you have several images/data sets for the same domain?

Yes No

How many?

With how many different symbols?

Vector data

Can your method work directly on vector data?

Yes No

Do you produce your own vector data, or do you work on third-party vector data?

Own Third party

What are the different kinds of primitives used in your data?

Lines Arcs Other:

What are the attributes associated with these primitives?

Do you handle filled vector primitives?

Yes No

Segmentation & other constraints

Is your method able to recognize non-segmented symbols?

Yes No

From a bitmap? From a vector description?

Does your method require constraints, or a constrained environment, to operate on a given type of data?

Yes No

What are these constraints?

Degradation & Transformations

Is your method able to work on noisy or distorted images/data?

Yes No

What kind of noise/degradation?

Do you use degradation models to evaluate your method?

Yes No

What kind of models?

Is your method invariant to rotation?

Yes No

What kind of rotation?

Scanning skew Any arbitrary rotation

Does your method support other transformations?

Yes No

Are these transformations supported only in combination with certain constraints (e.g., scaling only with non-segmented symbols)? Please explain.


Training
Does your method need training before the recognition step?

Yes No

What kind of data (bitmap? vector? degraded?) do you need to use for the training phase?

Do you have additional requirements (quantity, quality, resolution, etc.)?

What is the minimum number of instances per model that your method absolutely requires?

Are there other constraints to consider for training?

Performance aspects

What is the order of magnitude of your method's execution time (please specify the data size)?

Can your method be executed in a "reasonable" time (i.e., can it be evaluated during a contest)?

Yes No

Is your method scalable (i.e., does it continue to work as the number of symbols to recognize increases)?

Yes No

Do you have an assessment of the computational complexity of your method?

Have you evaluated your method yourself against particular criteria?

Yes No

What are these criteria?

Any further comments?

Thank you very much for your collaboration. Please submit the form by pressing the Submit button below.